In this article by Rishi Yadav, the author of Spark Cookbook, we will cover the following recipes:
- Installing Spark from binaries
- Building the Spark source code with Maven
Apache Spark is a general-purpose cluster computing system for processing big data workloads. What sets Spark apart from its predecessors, such as MapReduce, is its speed, ease of use, and sophisticated analytics.
Apache Spark was originally developed at AMPLab, UC Berkeley, in 2009. It was made open source in 2010 under the BSD license and switched to the Apache 2.0 license in 2013. Toward the latter part of 2013, the creators of Spark founded Databricks to focus on Spark’s development and future releases.
Talking about speed, Spark can achieve sub-second latency on big data workloads. To achieve such low latency, Spark makes use of memory for storage: whereas MapReduce uses memory primarily for actual computation and persists intermediate results to disk, Spark uses memory both to compute and to store objects.
Spark also provides a unified runtime connecting to various big data storage sources, such as HDFS, Cassandra, HBase, and S3. It also provides a rich set of higher-level libraries for different big data compute tasks, such as machine learning, SQL processing, graph processing, and real-time streaming. These libraries make development faster and can be combined in an arbitrary fashion.
Though Spark is written in Scala, and this book focuses only on recipes in Scala, Spark also supports Java and Python.
Spark is an open source community project, and everyone uses the pure open source Apache distributions for deployments, unlike Hadoop, which has multiple distributions available with vendor enhancements.
The following figure shows the Spark ecosystem:
The Spark runtime runs on top of a variety of cluster managers, including YARN (Hadoop’s compute framework), Mesos, and Spark’s own cluster manager, called standalone mode. Tachyon is a memory-centric distributed file system that enables reliable file sharing at memory speed across cluster frameworks; in short, it is an off-heap, in-memory storage layer that helps share data across jobs and users. Mesos is a cluster manager that is evolving into a data center operating system. YARN has robust resource management features that Spark can seamlessly use.
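As a quick illustration (the host names and ports below are placeholders, and the spark-shell binary is installed in the recipe that follows), the same Spark shell can be pointed at any of these cluster managers simply by changing the master URL:
$ spark-shell --master local[2]                    # run locally with two worker threads
$ spark-shell --master spark://master-host:7077    # standalone cluster manager
$ spark-shell --master yarn-client                 # YARN (needs HADOOP_CONF_DIR/YARN_CONF_DIR set)
$ spark-shell --master mesos://master-host:5050    # Mesos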
Spark can either be built from the source code, or precompiled binaries can be downloaded from http://spark.apache.org. For a standard use case, binaries are good enough, and this recipe will focus on installing Spark using binaries.
All the recipes in this book are developed using Ubuntu Linux but should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable to be set.
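For example, you can check your Java installation and set JAVA_HOME as follows (the JDK path shown here is only illustrative; substitute the path where your JDK is actually installed):
$ java -version
$ echo "export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64" >> /home/hduser/.bashrc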
In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:
| Directory | Description |
| --- | --- |
| /bin | Essential command binaries |
| /etc | Host-specific system configuration |
| /opt | Add-on application software packages |
| /var | Variable data |
| /tmp | Temporary files |
| /home | User home directories |
At the time of writing, Spark’s current version is 1.4. Please check the latest version on Spark’s download page at http://spark.apache.org/downloads.html. Binaries are built against the most recent and stable version of Hadoop. To use a specific version of Hadoop, the recommended approach is to build from the sources, which will be covered in the next recipe.
The following are the installation steps:
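# Download and extract the prebuilt binaries, and rename the directory to spark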
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.4.tgz
$ tar -zxf spark-1.4.0-bin-hadoop2.4.tgz
$ sudo mv spark-1.4.0-bin-hadoop2.4 spark
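# Move the default configuration to /etc/spark (host-specific system configuration)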
$ sudo mkdir -p /etc/spark
$ sudo mv spark/conf/* /etc/spark
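# Install Spark under /opt (add-on application software packages) with root ownership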
$ sudo mkdir -p /opt/infoobjects
$ sudo mv spark /opt/infoobjects/
$ sudo chown -R root:root /opt/infoobjects/spark
$ sudo chmod -R 755 /opt/infoobjects/spark
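# Symlink the configuration directory back into the Spark home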
$ cd /opt/infoobjects/spark
$ sudo ln -s /etc/spark conf
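# Put the Spark binaries on hduser's PATH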
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
$ sudo mkdir -p /var/log/spark
$ sudo chown -R hduser:hduser /var/log/spark
$ mkdir /tmp/spark
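# Record the Hadoop configuration, log, and worker locations in spark-env.sh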
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop"
>> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/Hadoop"
>> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
Installing Spark using binaries works fine in most cases. For advanced cases, such as the following (but not limited to), compiling from the source code is a better option:
- Compiling for a specific Hadoop version
- Adding the Hive integration
- Adding the YARN integration
The following are the prerequisites for this recipe to work:
- Java 1.6 or a later version
- Maven 3.x
The following are the steps to build the Spark source code with Maven:
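# Give the JVM extra PermGen space so that the Maven build does not run out of memory
# (reload .bashrc or open a new shell for this to take effect)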
$ echo "export _JAVA_OPTIONS="-XX:MaxPermSize=1G"" >> /home/
hduser/.bashrc
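# Download the source of the 1.4 branch from GitHub; the archive unpacks to
# spark-branch-1.4, which we rename to spark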
$ wget https://github.com/apache/spark/archive/branch-1.4.zip
$ unzip branch-1.4.zip
$ mv spark-branch-1.4 spark
$ cd spark
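# Build with YARN and Hive support against Hadoop 2.4, skipping tests to speed up the build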
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
$ cd ..
$ sudo mv spark/conf /etc/spark
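# The remaining layout steps mirror the binary installation recipe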
$ sudo mv spark /opt/infoobjects/spark
$ sudo chown -R root:root /opt/infoobjects/spark
$ sudo chmod -R 755 /opt/infoobjects/spark
$ cd /opt/infoobjects/spark
$ sudo ln -s /etc/spark conf
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
$ sudo mkdir -p /var/log/spark
$ sudo chown -R hduser:hduser /var/log/spark
$ mkdir /tmp/spark
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop"
>> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/Hadoop"
>> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
In this article, we learned what Apache Spark is, how to install Spark from binaries, and how to build the Spark source code with Maven.