Raspberry Pi 2 cluster with NAT routing

Introduction

In this post I will describe how to build a 12-node cluster of Raspberry Pi 2s, how to set it up and configure it with a NAT-routing, DHCP-serving head node, and some simple ways to manage the worker nodes. To minimise typing and copy-and-pasting, and to make everything as simple and automated as possible, all of the commands, scripts and config files associated with this post are available in an associated GitHub repo, which the reader is encouraged to browse in conjunction with reading this post. All of the scripts are very short and simple (just a few lines), and could easily be entered directly at the command line. There is no hidden magic here.

Hardware

To give an idea of what I’m talking about, a picture of my finished cluster is given below.

A 12-node Pi 2 cluster

In the foreground are 12 Pis in off-the-shelf cases, stacked in 3 columns of 4. They are connected to a 16-port switch in the background via the regular Pi ethernet ports. They are powered by three 4-port USB chargers (one per column), which are resting on top of the switch. The head node has a USB ethernet dongle, which is in turn connected to an upstream switch, providing internet connectivity. In the picture, the head node is also connected to a display via an HDMI cable (and to a wireless keyboard), though once up and running, the display and keyboard are optional.

For this cluster you need:

12 x Raspberry Pi 2
12 x (stackable) Pi 2 case
12 x uSD card (64GB, Class 10)
12 x short ethernet patch cable
12 x short USB to micro-USB cable
3 x 4-port high-power USB charger (one per column of Pis)
1 x 16-port switch (a 12-port switch would do)
1 x USB ethernet dongle
1 x HDMI display and cable
1 x USB keyboard
1 x long patch cable for the internet uplink
1 x 6-way mains power adaptor/extension

Software

I am using vanilla Raspbian as the base OS. Using an Ubuntu laptop, I flashed the 12 uSD cards one at a time with

sudo dd bs=32M if=2015-05-05-raspbian-wheezy.img of=/dev/mmcblk0
sync
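
Each card takes a few minutes to write, so if you would rather not babysit twelve separate invocations, a simple loop along the following lines will prompt you to swap cards between writes. This is just a sketch: it assumes your card reader always re-appears as /dev/mmcblk0, which you should check on your own machine.

for i in $(seq 1 12)
do
  echo "Insert card $i and press return when ready"
  read dummy
  # write the image and flush it to the card before prompting for the next one
  sudo dd bs=32M if=2015-05-05-raspbian-wheezy.img of=/dev/mmcblk0
  sync
  echo "Card $i done - remove it"
done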

It takes between 3 and 4 minutes per card. Before assembling the cluster, I booted up each card in turn in a Pi 2 to do some very basic initial config. On first boot “raspi-config” will run. I made 4 changes to the default settings. From the main settings I selected “Expand filesystem” and set “Overclock” to “Pi2”. From the advanced settings I changed the RAM split to “16” and enabled SSH. I then finished and rebooted to re-size the card, before logging back in (pi/raspberry) and doing a “shutdown -h now” and then powering down. Again, this took between 3 and 4 minutes per card.

After assembling the cluster, put the cards in all of the Pis, but don’t power up any of them yet. Pick a head node and attach the ethernet dongle, the internet uplink, a keyboard and a display, and then boot up just this head node. Assuming that your internet uplink will respond to DHCP requests made by the dongle, the Pi should detect that internet connectivity is via the dongle (eth1) and not via the internal NIC (eth0), and configure itself appropriately. Check you have connectivity by logging in and doing a ping www.google.com or some such. Also change the password on this node to something secure (using passwd). Make sure you have internet connectivity on the head node before proceeding (ifconfig provides useful info).
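
For example, a quick sanity check (nothing more than that) might look like:

ping -c 3 www.google.com   # should get replies via the dongle
ifconfig eth1              # the dongle should have picked up an address from your upstream DHCP server
passwd                     # change the default password for the pi user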

Grab my code from GitHub and install a few required packages with

wget https://github.com/darrenjw/blog/archive/master.zip
unzip master.zip
cd blog-master/pi-cluster
sudo sh install-packages

This could take around half an hour, and the node will reboot when it is finished. Log back in and return to the script directory to continue. Configure the network with

sudo sh setup-network

This will first configure the two network interfaces correctly, then set up a DHCP server to manage the network settings of the nodes on the internal network, and finally will set up iptables correctly for NAT routing of internal traffic from the worker nodes to the internet and back, ensuring that the LAN-side nodes can all connect to the internet properly. Again, this script will reboot when it is finished (it only takes a few seconds to run).
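
For reference, the core of a NAT setup like this normally amounts to enabling IP forwarding plus a masquerade rule and a couple of forwarding rules. The sketch below illustrates that general pattern, using eth0 for the internal network and eth1 for the uplink as above; it is not a copy of what setup-network actually does, and it doesn’t deal with making the rules persistent across reboots.

# enable forwarding of IPv4 packets
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
# rewrite internal traffic leaving via the uplink so replies come back to the head node
sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
# allow replies to established connections back in to the internal network
sudo iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# allow the workers on eth0 to initiate connections out via eth1
sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT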

Once the head node comes back up, log back in, check you still have internet connectivity, and then return to the script directory.

Now boot up all of the other nodes. When they boot, they will send out DHCP requests which the head node should respond to, correctly configuring all of the nodes for internet connectivity. Wait at least 2-3 minutes for all of the nodes to have a chance to boot up properly.
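
If you want to check that leases are actually being handed out, the DHCP lease file on the head node is worth a look. Exactly where it lives depends on which DHCP server setup-network installs, which I’m not assuming here; these are the standard locations for the two common choices on Raspbian/Debian:

cat /var/lib/dhcp/dhcpd.leases     # if the head node is running isc-dhcp-server
cat /var/lib/misc/dnsmasq.leases   # if it is running dnsmasq instead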

Now on the head node run

sh setup-cluster

Note that this does not need to be run as root. It will first generate SSH keys on the head node (for passwordless SSH). Just hit return (3 times) in response to the prompts. Once it has generated the keys, it will scan the internal network to find the other nodes and write a list of workers to the file "workers.txt", which is placed in both the script directory and the home directory. It will then copy the keys to each worker node in turn. For this, you will have to accept the connection and type in the password (raspberry) for each node in turn. This should be the last time you need to enter a password for the worker nodes. Finally, it will upgrade Raspbian on the workers, and the workers will all reboot when they have finished upgrading. This last stage will take a long time (up to half an hour, depending on your internet connection). Once this command has completed, wait 2-3 minutes for the workers to reboot before continuing.

Note that the script setup-cluster calls three other scripts to do its work, and these scripts can be useful on their own. The script map-network scans the network to see what nodes are available, and stores them in the file workers.txt. You should re-run this whenever nodes are added or removed from the cluster. The script copy-keys copies the SSH keys to all of the workers. You should re-run this after adding new nodes to the cluster (after running map-network). Finally, the script upgrade-workers upgrades Raspbian on all of the worker nodes and then reboots them.
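
To give a flavour of what map-network and copy-keys do, here is a rough sketch of the same idea. It is not the contents of the actual scripts: it assumes nmap is among the packages installed earlier, that the internal network is 192.168.0.0/24 with the head node on 192.168.0.1 (consistent with the Spark master address used later), and that workers are identified by IP address.

# scan the internal network for live hosts and record them, excluding the head node itself
nmap -sn 192.168.0.0/24 -oG - | awk '/Up$/ {print $2}' | grep -v '^192.168.0.1$' > ~/workers.txt
# push the head node's SSH key to each worker (you will be asked for the password each time)
for host in $(cat ~/workers.txt)
do
  ssh-copy-id pi@$host
done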

Finally, there is one more script, shutdown-workers which shuts down all of the workers in anticipation of the head node being shut down and the cluster being powered down.
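
Since the pssh package that provides parallel-scp (used below for Spark) also provides parallel-ssh, shutting down all of the workers really only needs a one-liner along these lines (a sketch of the idea rather than the script itself):

parallel-ssh -h ~/workers.txt -t 0 "sudo shutdown -h now"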

You may be tempted to customise the worker nodes by logging in to them individually and giving them host names, etc. It’s best to resist this temptation. The Zen of cloud computing is to treat nodes like sheep and not like ponies! Don’t name them, don’t pick favourites, don’t treat them individually, and don’t care if the odd one occasionally goes missing. You might imagine that this makes it difficult to track down hardware problems with individual nodes, but in practice these can usually be pinned down quite quickly by looking at the indicator lights on the nodes and the network switch when the system is under load.

That’s it – you now have your own private cloud.

Use case: a standalone Spark cluster

There are lots of things that one can do with a cluster like this, but for illustration, I will now show how to use the cluster as a standalone Apache Spark cluster. In previous posts I have described how to install Spark on a Pi 2 and how to create a small Spark cluster. It may be worth quickly reviewing those posts before proceeding. On the head node run

cd
wget http://www.eu.apache.org/dist//spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.6.tgz
tar xvfz spark-1.4.1-bin-hadoop2.6.tgz
cd spark-1.4.1-bin-hadoop2.6

to get and unpack a recent version of Spark. Then configure Spark appropriately with:

cp ~/workers.txt conf/slaves
echo '#!/usr/bin/env bash' > conf/spark-env.sh
echo "" >> conf/spark-env.sh
echo "SPARK_MASTER_IP=192.168.0.1" >> conf/spark-env.sh
echo "SPARK_WORKER_MEMORY=512m" >> conf/spark-env.sh

Having configured Spark on the head node (which will also act as the Spark Master), we can copy it to the workers with

parallel-scp -h ~/workers.txt -r -p 100 -t 0 /home/pi/spark-1.4.1-bin-hadoop2.6 /home/pi/spark-1.4.1-bin-hadoop2.6

Note that if you subsequently change the config, you don’t need to re-copy the entire Spark distribution. Just re-copy the conf directory with

parallel-scp -h ~/workers.txt -r -p 100 -t 0 /home/pi/spark-1.4.1-bin-hadoop2.6/conf /home/pi/spark-1.4.1-bin-hadoop2.6/conf

A simple Spark session can then be run with

sbin/start-all.sh
bin/spark-shell --master spark://192.168.0.1:7077

When Spark eventually starts up, you can enter the following into the Spark shell

sc.textFile("README.md").count
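
You can also test the cluster by submitting a non-interactive batch job with spark-submit rather than using the shell; for example, the bundled SparkPi example can be run with something like the following (the exact name of the examples jar under lib/ is an assumption, so check what your distribution contains, and note that if the shell is still running it will be holding all the cores, so exit it first):

bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://192.168.0.1:7077 \
  lib/spark-examples-1.4.1-hadoop2.6.0.jar 100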

After exiting the Spark shell (Ctrl-D), you can shut everything down with

sbin/stop-all.sh

See my other posts for additional pointers and further reading.

Some useful links/references

The main relevant link for this post is that of the associated code on GitHub:

https://github.com/darrenjw/blog/tree/master/pi-cluster

If you are a Git person you might want to clone or fork this repo. The main other post I found useful for setting up the Pi as a NAT router was:

http://qcktech.blogspot.co.uk/2012/08/raspberry-pi-as-router.html

There was also an article on turning your Pi into a Wifi access point in Issue 11 of MagPi that I found somewhat useful.

Setting up a standalone Apache Spark cluster of Raspberry Pi 2

In the previous post I explained how to install Apache Spark in “local” mode on a Raspberry Pi 2. In this post I will explain how to link together a collection of such nodes into a standalone Apache Spark cluster. Here, “standalone” refers to the fact that Spark is managing the cluster itself, and that it is not running on top of Hadoop or some other cluster management solution.

I will assume at least two Raspberry Pi 2 nodes on the same local network, with identical Spark distributions installed in the same directory of the same user account on each node. See the previous post for instructions on how to do this. I will use two such nodes, with Spark installed under the user account spark.

First, you must decide on one of your nodes to be the master. I have two nodes, raspi08 and raspi09. I will set up raspi08 as the master. The spark account on the master needs to be able to SSH into the same account on all of the other nodes without the need to provide a password, so it makes sense to begin by setting up passwordless SSH. Log in to the spark account on the master and generate SSH keys with:

ssh-keygen

Just press return when asked for a password to keep it password-free. Copy the identity to each of the other nodes. eg. to copy it to raspi09 I use:

ssh-copy-id spark@raspi09

You will obviously need to provide a password at this point. Once the identity is copied, SSH into each node to ensure that you can indeed connect without the need for a password.
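
For example, a non-interactive check like the following should print the remote hostname without prompting for anything:

ssh spark@raspi09 hostname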

Once passwordless SSH is set up and working, log into the master node to configure the Spark installation. Within the conf/ directory of the Spark installation, create a file called slaves and enter a list of all nodes you want to have as “workers”. eg. mine looks like:

raspi08
raspi09

Note that in my case raspi08 is listed as a worker despite also acting as master. That is perfectly possible. If you have plenty of nodes you might not want to do this, as the Pi 2 doesn’t really have quite enough RAM to be able to do this well, but since I have only two nodes, it seems like a good idea here.

Also within the conf/ directory, take a copy of the environment template:

cp spark-env.sh.template spark-env.sh

and then edit spark-env.sh according to your needs. I solved some obscure Akka errors about workers not being able to connect back to the master by hard-coding the IP address of the master node into the config file:

SPARK_MASTER_IP=192.168.1.177

You can find out the IP address of your node by running ifconfig. You should also set the memory that can be used by each worker node. This is a bit tricky, as RAM is a bit tight on the Pi 2. I went for 512MB, leaving nearly half a gig for the OS.

SPARK_WORKER_MEMORY=512m

Once you are done, the edited environment file (but not the slaves list) needs to be copied to each worker. eg.

scp spark-env.sh spark@raspi09:spark-1.3.0-bin-hadoop2.4/conf/

You shouldn’t need to supply a password…

At this point, you should be ready to bring up the cluster. You can bring up the master and workers all in one go with:

sbin/start-all.sh

When the master comes up, it starts a web service on port 8080. eg. I connect to it from any machine on the local network by pointing my browser at: http://raspi08:8080/

If a web page comes up, then the master is running. The page gives other diagnostic information, including a list of the workers that have been brought up and registered with the master, along with access to a lot of debugging info. If everything looks OK, make a note of the URL for the Spark master, which is displayed in large text at the top of the page. It should just be spark://192.168.1.177:7077, where the IP address is that of your master node.
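
If you prefer to check from the command line rather than a browser, a rough check such as the following (using wget, which is installed by default) will confirm that something is listening on port 8080 and that workers are mentioned on the status page:

wget -qO- http://raspi08:8080/ | grep -i worker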

Try bringing up a spark shell on the master with:

bin/spark-shell --master spark://192.168.1.177:7077

Once the shell comes up, go back to your web browser and refresh the page to see the connection. Go back to the shell and try a simple test like:

sc.textFile("README.md").count

As usual, it is Ctrl-D to exit the shell. To bring down the cluster, use

sbin/stop-all.sh

Once you are happy that everything is working as it should, you probably want to reduce the amount of diagnostic debugging info that is echoed to the console. Do this by going back into the conf/ directory and copying the log4j template:

cp log4j.properties.template log4j.properties

and then editing log4j.properties. There is a line near the beginning of the file:

log4j.rootCategory=INFO, console

Change INFO to WARN so it reads:

log4j.rootCategory=WARN, console

Then when you next bring up the cluster, everything should be a bit less noisy.
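
If you prefer to make that change from the command line, a one-liner like this (run from within the conf/ directory, after copying the template) does the same job:

sed -i 's/log4j.rootCategory=INFO, console/log4j.rootCategory=WARN, console/' log4j.properties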

That’s it. You’ve built a Spark cluster! Note that when accessing files from Spark scripts (and applications) it is assumed that the file exists in the same directory on every worker node. For testing purposes, it is easy enough to use scp before running the script to copy the files to all of the workers. But that is obviously somewhat unsatisfactory in the long term. Another possibility is to set up an NFS file server and mount it at the same mount point on each worker. Then make sure that any files you access are shared via the NFS file server. Even that solution isn’t totally satisfactory, due to the slow interconnect on the Pi 2. Ultimately, it would be better to set up a proper distributed file system such as Hadoop’s HDFS on your cluster and then share files via HDFS. That is how most production Spark clusters are set up. I may look at that in another post, but in the meantime check out the Spark standalone documentation for further information.
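
Coming back to the scp option mentioned above, a crude way to push a data file to the same place on every node before running a script might look like the following; data.csv is just a placeholder name, and the node names are the two used in this post. Your Spark code would then refer to /home/spark/data.csv, which exists at the same path on every node.

for host in raspi08 raspi09
do
  scp data.csv spark@$host:~/
done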