Cluster Hat setup - Part 3 - Ansible
Step 11 - Set up ssh keys to prepare for Ansible
Note: The first time I did the setup I set up the ssh keys at this point; the second time I set them up at an earlier stage. If you have already set up ssh keys you can skip this step.
I chose to use ssh keys so that Ansible can connect to the servers without passwords.
So first I set up a public/private keypair on the controller:
-
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
-
cat ~/.ssh/id_rsa.pub
Copy the public key from the output to the clipboard.
Next I ssh to each of the Pis in the cluster and install the ssh key:
(Repeat the following for each Pi, replacing pX with p1-p4)
This will:
- log in to the Pi over ssh
- change the password away from raspberry
- log out of the Pi
- set up the authorized_keys file
- log in again to confirm that this time no password is needed
- log out of the Pi again
-
ssh pX
-
*Type the current password (raspberry)*
-
passwd
-
*Change password to something better than raspberry - can be long and complicated as we should never need it again*
-
exit
-
ssh-copy-id -i ~/.ssh/id_rsa.pub pX
-
*Enter the new password when prompted*
-
ssh pX
-
exit
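To avoid repeating this by hand for every Pi, the ssh-copy-id step can be wrapped in a loop. A minimal sketch, assuming the hosts resolve as p1 to p4; you will still be prompted for each Pi's password, and the passwd change still has to be done interactively per host:
-
for N in 1 2 3 4; do ssh-copy-id -i ~/.ssh/id_rsa.pub p${N}; done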
Step 12 - Set up Ansible
I recommend watching this video https://www.youtube.com/watch?v=ZNB1at8mJWY for an introduction.
Rather than manage each Pi Zero in the cluster individually I want to use Ansible.
I will install Ansible on the controller machine, then Ansible will perform tasks on the Pi Zeros in the cluster.
First log into the controller and install Ansible:
-
sudo apt-get install python-pip git python-dev sshpass
-
sudo pip install markupsafe
-
sudo pip install ansible
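You can confirm the install worked by asking Ansible for its version:
-
ansible --version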
Step 13 - Test out a few Ansible commands
Before testing out these commands don't forget to turn the cluster servers on!
I had some problems with host key checking. Rather than turn off host key checking in Ansible I decided to connect once to each host with the full domain name, which adds the key to the known_hosts file.
-
ssh p1.metcarob-local.com
-
ssh p2.metcarob-local.com
-
ssh p3.metcarob-local.com
-
ssh p4.metcarob-local.com
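An alternative I didn't use here: ssh-keyscan can populate known_hosts for all four hosts in one go, without logging in to each Pi:
-
ssh-keyscan p1.metcarob-local.com p2.metcarob-local.com p3.metcarob-local.com p4.metcarob-local.com >> ~/.ssh/known_hosts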
Next I created an ansible directory and set up a hosts file on the controller (~/ansible/hosts):
-
[clusternodes]
-
p[1:4].metcarob-local.com
-
-
[clusternodes:vars]
-
ansible_ssh_user=pi
Then I played with a few one-off commands run against the servers:
-
ansible -i ~/ansible/hosts clusternodes --list-hosts
-
-
ansible -i ~/ansible/hosts clusternodes -m ping
-
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "date"
-
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "cat /var/log/syslog | grep ntp"
-
-
ansible -i ~/ansible/hosts clusternodes -m setup
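The setup module dumps a lot of facts; its filter parameter narrows the output, for example to just the distribution details:
-
ansible -i ~/ansible/hosts clusternodes -m setup -a "filter=ansible_distribution*"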
Running a command with sudo via Ansible:
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "ls /" -s
Running on one host at a time (the default is 5 at a time):
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "ls /" -s --forks=1
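Note: -s is the old sudo shorthand; newer Ansible releases deprecate it in favour of --become (or -b), so the first command above becomes:
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "ls /" --become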
Step 14 - Configuration as code
I would like to use GitHub to store the configuration code for the cluster. I will put a link to the public repo for reference, but if you are following this tutorial you will not need to clone it. (https://github.com/rmetcalf9/metcarob-local_cluster)
I added the public key for the pi user on the controller to the ssh keys in my github account.
I will use the ~/ansible directory on the controller for the repo. I ran the following on the controller:
-
git config --global user.email "you@example.com"
-
git config --global user.name "Your Name"
-
-
cd ~/ansible
-
echo "# metcarob-local_cluster" >> README.md
-
git init
-
git add README.md
-
git add hosts
-
git commit -m "first commit"
-
git remote add origin git@github.com:YOURUSER/YOURREPO.git
-
git push -u origin master
As I continue through this tutorial I will keep updating this repo, but I won't write out the git commands each time.
Step 15 - Playbook 001 - Run command
As I mentioned before I want to use ansible to do an update and upgrade on each node in the cluster.
I created the following playbook:
~/ansible/apt_upgrade.yml:
-
---
-
- hosts: clusternodes
-
  remote_user: pi
-
  become: true
-
-
  tasks:
-
  - name: Update and upgrade apt packages
-
    become: true
-
    apt:
-
      upgrade: yes
-
      update_cache: yes
-
      cache_valid_time: 86400 #One day
This can be run with the command:
-
cd ~/ansible
-
ansible-playbook -i ~/ansible/hosts apt_upgrade.yml
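If you want to see what would change before running it for real, Ansible's check mode does a dry run (the apt module supports it):
-
ansible-playbook -i ~/ansible/hosts apt_upgrade.yml --check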
I also would like to build a playbook to install docker onto each Pi in the cluster. I am learning ansible so I will do this one step at a time. My first step will be to build a simple playbook that does nothing.
~/ansible/install_docker.yml:
-
---
-
- hosts: clusternodes
-
  tasks:
-
  - name: install docker
-
    shell: echo "Hello World"
-
    changed_when: false
This can be run with the command:
-
cd ~/ansible
-
ansible-playbook -i ~/ansible/hosts install_docker.yml
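To actually see the Hello World output coming back from each node, add -v for verbose output:
-
ansible-playbook -i ~/ansible/hosts install_docker.yml -v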
Step 16 - Playbook 002 - Install and Run shell script
First create a script ~/ansible/install_docker.sh:
-
#!/bin/bash
-
-
echo "Test script ${0} - Params {1} {2}"
-
-
exit 0
Then I changed install_docker.yml to
-
---
-
- hosts: clusternodes
-
  tasks:
-
  - name: install docker
-
    script: install_docker.sh "PARAM001" "PARAM002"
-
    changed_when: false
As a test I changed the script to exit 1 and confirmed that ansible reported an error.
Step 17 - Playbook 003 - Create a conditional step that is only called if a certain file exists
Change the playbook so it will only try to install docker if a particular file doesn't exist. (Later I will change it to a file I know is created by the docker install.)
file ~/ansible/install_docker.yml:
-
---
-
- hosts: clusternodes
-
  tasks:
-
  - name: Check that the somefile.conf exists
-
    stat:
-
      path: ~/some_file_i_dont_know_yet
-
    register: stat_result
-
-
  - name: install docker
-
    script: install_docker.sh "PARAM001" "PARAM002"
-
    when: stat_result.stat.exists == False
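To see exactly what the stat module returns, and therefore what stat_result will contain, you can run it ad hoc from the shell (using the same placeholder path):
-
ansible -i ~/ansible/hosts clusternodes -m stat -a "path=~/some_file_i_dont_know_yet"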
Step 18 - Playbook 004 - Installing Docker
I want to test installing Docker, but on one Pi in the cluster rather than running on all 4, so I created a group in the hosts file called "gunieapig":
-
[clusternodes]
-
p[1:4].metcarob-local.com
-
-
[clusternodes:vars]
-
ansible_ssh_user=pi
-
-
[gunieapig]
-
p1.metcarob-local.com
I also changed the hosts line of the install_docker.yml playbook to gunieapig. This was for testing only; I switched it back once it was working.
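An alternative to a separate group, which I didn't use here, is Ansible's --limit flag; it restricts a run to a subset of hosts without editing anything:
-
ansible-playbook -i ~/ansible/hosts install_docker.yml --limit p1.metcarob-local.com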
The script to install docker is ~/ansible/install_docker.sh
-
#!/bin/bash
-
-
curl -sSL https://get.docker.com | sh
-
-
if [[ $? -ne 0 ]]
-
then
-
exit 1
-
fi
-
-
exit 0
We will need to run our docker servers against an insecure registry, so we need to override the command line used to start the docker daemon:
~/ansible/overlay.conf
-
[Service]
-
ExecStart=
-
ExecStart=/usr/bin/dockerd --storage-driver overlay -H fd:// --insecure-registry controller.metcarob-local.com:5000
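The empty ExecStart= line is deliberate: in a systemd drop-in it clears the ExecStart inherited from the packaged unit file so the next line can replace it. Once the drop-in is installed and docker restarted, you can inspect the merged unit on a node, and (depending on the docker version) docker info will list the insecure registry:
-
systemctl cat docker
-
-
docker info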
~/ansible/install_docker.yml
-
---
-
- hosts: clusternodes
-
  remote_user: pi
-
  become: true
-
-
  tasks:
-
  - name: Check that the somefile.conf exists
-
    stat:
-
      path: /etc/docker
-
    register: stat_result
-
-
  - name: install docker
-
    script: install_docker.sh
-
    when: stat_result.stat.exists == False
-
-
  - group:
-
      name: docker
-
      state: present
-
    notify:
-
    - restart docker
-
-
  - user:
-
      name: pi
-
      groups: docker
-
      append: yes
-
    notify:
-
    - restart docker
-
-
  - name: Create docker service directory
-
    file:
-
      path: /etc/systemd/system/docker.service.d
-
      state: directory
-
      mode: "u=rw,g=r,o=r"
-
-
  - template:
-
      src: overlay.conf
-
      dest: /etc/systemd/system/docker.service.d/overlay.conf
-
      owner: root
-
      group: root
-
      mode: "u=rw,g=r,o=r"
-
    notify:
-
    - restart docker
-
-
#  - debug:
-
#      msg: "FORCING restart of docker for testing"
-
#    changed_when: true
-
#    notify:
-
#    - restart docker
-
-
  handlers:
-
  - name: restart docker
-
    systemd:
-
      state: restarted
-
      daemon_reload: yes
-
      name: docker
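As before, the playbook is run with:
-
cd ~/ansible
-
ansible-playbook -i ~/ansible/hosts install_docker.yml
A quick ad-hoc check afterwards confirms docker is installed on every node:
-
ansible -i ~/ansible/hosts clusternodes -m shell -a "docker --version"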
The above playbook is not as efficient as it could be: it installs docker, starts it, adds the group and then restarts docker. It would be better to add the group first and eliminate the need to restart docker, but I wanted to experiment with ansible handlers and this is a learning project for me. I may optimise it later if I feel the need.
I successfully installed docker on all 4 Pi Zeros in my cluster. One of them couldn't install docker due to a problem with apt; rather than solve the problem I wiped the SD card and restored it back to its original settings. Being able to take any machine out of the cluster and re-image it is one of the advantages of having a cluster!
I also noticed .retry files appearing in the directory, so I added a .gitignore file.
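A minimal ~/ansible/.gitignore, assuming the retry files are all you want to exclude:
-
*.retry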
Step 19 - Create a webserver image on controller
(I got some information for this part from https://sreeninet.wordpress.com/2016/02/21/docker-on-raspberry-pi/)
Install docker on the controller:
-
curl -sSL https://get.docker.com | sh
-
sudo groupadd docker
-
sudo gpasswd -a ${USER} docker
-
sudo service docker restart
Note: You will need to exit and restart the ssh session for the group to take effect.
You can check it is working by running:
-
docker version
If you get nothing back then docker didn't install.
If you get client version info but then "Cannot connect to the Docker daemon. Is the docker daemon running on this host?" it means the docker group hasn't taken effect.
Otherwise you will get client version and server version info.
Create a directory where we can build our docker images
-
mkdir ~/dockerbuild
-
mkdir ~/dockerbuild/apachepi
-
cd ~/dockerbuild/apachepi
When it starts, Apache checks for a pid file and refuses to run if one is present. This means that if Apache is not stopped properly it will refuse to start again. To get around this we create a script that deletes this file before running Apache. This script will run inside our docker container.
Put the following in ~/dockerbuild/apachepi/apache2-foreground:
-
#!/bin/bash
-
set -e
-
-
# Apache gets grumpy about PID files pre-existing
-
rm -f /var/run/apache2/apache2.pid
-
-
exec /usr/sbin/apache2ctl -DFOREGROUND
Make sure this is executable:
-
chmod +x ~/dockerbuild/apachepi/apache2-foreground
Create a docker file to build an apache image: (~/dockerbuild/apachepi/Dockerfile)
-
FROM resin/rpi-raspbian
-
MAINTAINER Robert Metcalf
-
-
# Update
-
RUN apt-get update
-
-
# Install apache2
-
RUN apt-get install -y apache2
-
-
COPY apache2-foreground /usr/local/bin/
-
-
EXPOSE 80
-
CMD ["/usr/local/bin/apache2-foreground"]
Build the image
-
docker build -t controller.metcarob-local.com:5000/apachepi:v1 .
Once it's complete you can check the image has been created:
-
docker images
The command to start the image running is:
-
docker run --name apachepi_container -p 8080:80 -d controller.metcarob-local.com:5000/apachepi:v1
This will run the webserver on the controller on port 8080; you can change 8080 to any port. I named the instance apachepi_container.
The command to check it's running is:
-
docker ps -a
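You can also test it from the controller itself; assuming the port mapping above, a HEAD request should return Apache's response headers:
-
curl -I http://localhost:8080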
We can also go to any computer connected to the wifi network and browse to the controller (for me http://192.168.1.200:8080) to see the start page.
Docker will run the server in its own area and we can get a shell into this area with the command:
-
docker exec -i -t apachepi_container /bin/bash
(exit will bring us back to the controller)
Once the instance exists it can be started and stopped with
-
docker stop apachepi_container
-
docker start apachepi_container
When the instance is stopped it still exists and can be deleted using:
-
docker rm -f apachepi_container
This gets us a webserver running on the cluster controller. There are two problems with what we have built so far:
- We want the webserver to run on the cluster machines not the controller
- We can't access the cluster machines from outside the cluster
I will address these problems next.
Next Part
In the next part I will set up a private docker registry on the controller and get the cluster nodes to pull and run docker images from the private registry.
Cluster Hat setup - Part 4 - Docker Registry