
In this article by Rishi Yadav, the author of Spark Cookbook, we will cover the following recipes:

  • Installing Spark from binaries
  • Building the Spark source code with Maven


Introduction

Apache Spark is a general-purpose cluster computing system to process big data workloads. What sets Spark apart from its predecessors, such as MapReduce, is its speed, ease-of-use, and sophisticated analytics.

Apache Spark was originally developed at AMPLab, UC Berkeley, in 2009. It was made open source in 2010 under the BSD license and switched to the Apache 2.0 license in 2013. Toward the latter part of 2013, the creators of Spark founded Databricks to focus on Spark’s development and future releases.

Talking about speed, Spark can achieve sub-second latency on big data workloads. To achieve such low latency, Spark makes use of memory for storage. In MapReduce, memory is primarily used for the actual computation, and intermediate results are written to disk; Spark uses memory both to compute and to store objects.

Spark also provides a unified runtime connecting to various big data storage sources, such as HDFS, Cassandra, HBase, and S3. It also provides a rich set of higher-level libraries for different big data compute tasks, such as machine learning, SQL processing, graph processing, and real-time streaming. These libraries make development faster and can be combined in an arbitrary fashion.

Though Spark is written in Scala, and this book only focuses on recipes in Scala, Spark also supports Java and Python.

Spark is an open source community project, and most deployments use the pure open source Apache distribution, unlike Hadoop, which is available in multiple distributions with vendor enhancements.

The following figure shows the Spark ecosystem:

The Spark runtime runs on top of a variety of cluster managers, including YARN (Hadoop’s compute framework), Mesos, and Spark’s own cluster manager, called standalone mode. Tachyon is a memory-centric distributed file system that enables reliable file sharing at memory speed across cluster frameworks. In short, it is an off-heap, in-memory storage layer that helps share data across jobs and users. Mesos is a cluster manager that is evolving into a data center operating system. YARN is Hadoop’s compute framework, with robust resource management that Spark can use seamlessly.
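To make this concrete, the cluster manager is selected at application submission time through the --master option of the spark-submit script. The following is only a sketch and not part of the original text: the host names, ports, application class, and jar name are placeholders, and yarn-client is the Spark 1.x syntax for YARN client mode:

    $ spark-submit --master local[*] --class com.example.MyApp myapp.jar            # run locally on all cores
    $ spark-submit --master spark://master:7077 --class com.example.MyApp myapp.jar # standalone cluster manager
    $ spark-submit --master mesos://master:5050 --class com.example.MyApp myapp.jar # Mesos
    $ spark-submit --master yarn-client --class com.example.MyApp myapp.jar         # YARN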

Installing Spark from binaries

Spark can either be built from the source code, or precompiled binaries can be downloaded from http://spark.apache.org. For a standard use case, binaries are good enough, and this recipe focuses on installing Spark using binaries.

Getting ready

All the recipes in this book are developed using Ubuntu Linux but should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable to be set.
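A quick way to confirm both requirements before proceeding (shown here as a convenience; the JDK path in the comment is only an example and will differ on your system):

    $ java -version       # should print the installed Java version
    $ echo $JAVA_HOME     # should print your JDK path, for example /usr/lib/jvm/java-7-openjdk-amd64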

In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:

Directory   Description
/bin        Essential command binaries
/etc        Host-specific system configuration
/opt        Add-on application software packages
/var        Variable data
/tmp        Temporary files
/home       User home directories

How to do it…

At the time of writing, Spark’s current version is 1.4. Please check the latest version on Spark’s download page at http://spark.apache.org/downloads.html. Binaries are built against the most recent stable version of Hadoop. To use a specific version of Hadoop, the recommended approach is to build from the source code, which will be covered in the next recipe.

The following are the installation steps; a quick verification check is shown after the last step:

  1. Open the terminal and download binaries using the following command:
    $ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.4.tgz
  2. Unpack binaries:
    $ tar -zxf spark-1.4.0-bin-hadoop2.4.tgz
  3. Rename the folder containing binaries by stripping the version information:
    $ sudo mv spark-1.4.0-bin-hadoop2.4 spark
  4. Move the configuration folder to /etc/spark so that it can be made a symbolic link later:
    $ sudo mv spark/conf /etc/spark
  5. Create your company-specific installation directory under /opt. As the recipes in this book are tested on the infoobjects sandbox, we are going to use infoobjects as the directory name. Create the /opt/infoobjects directory:
    $ sudo mkdir -p /opt/infoobjects
  6. Move the spark directory to /opt/infoobjects as it’s an add-on software package:
    $ sudo mv spark /opt/infoobjects/
  7. Change the ownership of the spark home directory to root:
    $ sudo chown -R root:root /opt/infoobjects/spark
  8. Change permissions of the spark home directory, 0755 = user:read-write-execute group:read-execute world:read-execute:
    $ sudo chmod -R 755 /opt/infoobjects/spark
  9. Move to the spark home directory:
    $ cd /opt/infoobjects/spark
  10. Create the symbolic link:
    $ sudo ln -s /etc/spark conf
  11. Append to PATH in .bashrc:
    $ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
  12. Open a new terminal.
  13. Create the log directory in /var:
    $ sudo mkdir -p /var/log/spark
  14. Make hduser the owner of the Spark log directory:
    $ sudo chown -R hduser:hduser /var/log/spark
  15. Create the Spark tmp directory:
    $ mkdir /tmp/spark
  16. Configure Spark with the help of the following command lines:
    $ cd /etc/spark
    $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop"
    >> spark-env.sh
    $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/Hadoop"
    >> spark-env.sh
    $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
    $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh

Building the Spark source code with Maven

Installing Spark using binaries works fine in most cases. For advanced cases, such as the following (though not limited to these), compiling from the source code is a better option:

  • Compiling for a specific Hadoop version
  • Adding the Hive integration
  • Adding the YARN integration

Getting ready

The following are the prerequisites for this recipe to work; a quick way to check them is shown after the list:

  • Java 1.6 or a later version
  • Maven 3.x
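
A quick way to check both prerequisites from the shell (shown as a convenience, not part of the original recipe):

    $ mvn -version    # prints the Maven version along with the Java version and Java home it uses
    $ java -version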

How to do it…

The following are the steps to build the Spark source code with Maven; a final configuration check is shown after the last step:

  1. Increase MaxPermSize for the JVM (the build needs a larger permanent generation):
    $ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
  2. Open a new terminal window and download the Spark source code from GitHub:
    $ wget https://github.com/apache/spark/archive/branch-1.4.zip
  3. Unpack the archive (it is a zip file, so use unzip rather than gunzip):
    $ unzip branch-1.4.zip
  4. Rename the unpacked directory (the GitHub archive unpacks to spark-branch-1.4) and move into it:
    $ mv spark-branch-1.4 spark
    $ cd spark
  5. Compile the sources with these flags: Yarn enabled, Hadoop version 2.4, Hive enabled, and skipping tests for faster compilation:
    $ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
  6. Move back to the parent directory and move the conf folder to /etc/spark so that it can be made a symbolic link later:
    $ cd ..
    $ sudo mv spark/conf /etc/spark
  7. Move the spark directory to /opt/infoobjects as it’s an add-on software package:
    $ sudo mv spark /opt/infoobjects/spark
  8. Change the ownership of the spark home directory to root:
    $ sudo chown -R root:root /opt/infoobjects/spark
  9. Change the permissions of the spark home directory 0755 = user:rwx group:r-x world:r-x:
    $ sudo chmod -R 755 /opt/infoobjects/spark
  10. Move to the spark home directory:
    $ cd /opt/infoobjects/spark
  11. Create a symbolic link:
    $ sudo ln -s /etc/spark conf
  12. Put the Spark executable in the path by editing .bashrc:
    $ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
  13. Create the log directory in /var:
    $ sudo mkdir -p /var/log/spark
  14. Make hduser the owner of the Spark log directory:
    $ sudo chown -R hduser:hduser /var/log/spark
  15. Create the Spark tmp directory:
    $ mkdir /tmp/spark
  16. Configure Spark with the help of the following command lines:
    $ cd /etc/spark
    $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop"
    >> spark-env.sh
    $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/Hadoop"
    >> spark-env.sh
    $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
    $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh

Summary

In this article, we learned what Apache Spark is, how to install Spark from binaries, and how to build the Spark source code with Maven.
