Installation for Spark

Follow the instructions in the Spark documentation. There are two ways to set this up:

Installation inheriting from the Hadoop cluster configuration

Inheriting from an existing Hadoop cluster configuration is usually the easiest approach.

To make the Hadoop configuration files visible to Spark, set HADOOP_CONF_DIR in $SPARK_HOME/conf/spark-env.sh to a directory containing core-site.xml, usually /etc/hadoop/conf.
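A minimal sketch of that spark-env.sh entry, assuming the Hadoop configuration lives in the default /etc/hadoop/conf:

  # $SPARK_HOME/conf/spark-env.sh
  # Point Spark at the directory that holds core-site.xml (and hdfs-site.xml, if present).
  export HADOOP_CONF_DIR=/etc/hadoop/conf

The core-site.xml in that directory is expected to already register SeaweedFS, i.e. set fs.seaweedfs.impl to seaweed.hdfs.SeaweedFileSystem and fs.defaultFS to the filer address, the same values the runtime example below passes with --conf.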

Installation not inheriting from the Hadoop cluster configuration

Copy the seaweedfs-hadoop2-client-x.x.x.jar to all executor machines.

Add the following to $SPARK_HOME/conf/spark-defaults.conf on every node running Spark:

  spark.driver.extraClassPath /path/to/seaweedfs-hadoop2-client-x.x.x.jar
  spark.executor.extraClassPath /path/to/seaweedfs-hadoop2-client-x.x.x.jar
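If you prefer not to pass the SeaweedFS settings on every submission, the same properties that the runtime example below supplies with --conf can also be set here through Spark's spark.hadoop.* prefix; a sketch, with the filer address taken from that example:

  spark.hadoop.fs.seaweedfs.impl seaweed.hdfs.SeaweedFileSystem
  spark.hadoop.fs.defaultFS seaweedfs://localhost:8888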

Otherwise, pass the SeaweedFS configuration at runtime:

  ./bin/spark-submit \
    --name "My app" \
    --master local[4] \
    --conf spark.eventLog.enabled=false \
    --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
    --conf spark.hadoop.fs.seaweedfs.impl=seaweed.hdfs.SeaweedFileSystem \
    --conf spark.hadoop.fs.defaultFS=seaweedfs://localhost:8888 \
    myApp.jar
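For context, a minimal sketch of what such an application might look like. The object name MyApp and the paths under the filer are assumptions for illustration; with fs.defaultFS set as above, unqualified paths would resolve against SeaweedFS as well.

  import org.apache.spark.sql.SparkSession

  object MyApp {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("My app")
        .getOrCreate()

      // With fs.defaultFS pointing at the SeaweedFS filer, plain paths resolve
      // against seaweedfs://localhost:8888; fully qualified URIs also work.
      val lines = spark.read.textFile("seaweedfs://localhost:8888/buckets/input/data.txt")
      println(s"line count: ${lines.count()}")

      // Writing back goes through the same Hadoop-compatible filesystem layer.
      lines.write.mode("overwrite").text("seaweedfs://localhost:8888/buckets/output/")

      spark.stop()
    }
  }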