CloudFoundry: Installing a local BOSH Director on Ubuntu using VirtualBox

BOSH is a project that unifies release, deployment, and lifecycle management of cloud based software.

In this article I will describe how to install BOSH onto VirtualBox.  This implementation is often called “BOSH Lite” because it internally uses containers to emulate VMs.

Prerequisites

Install VirtualBox

The first step is to install Oracle VirtualBox 5.1+ as the type 2 hypervisor; Vagrant is not required.  I have written up full details for Ubuntu 14.04/16.04 in this article.

You will also want to install the utilities below to assist in your workflow before moving forward.  Spiff is useful for YAML comparisons and yq is a YAML parser.

Install Spiff

# grab archive
wget https://github.com/cloudfoundry-incubator/spiff/releases/download/v1.0.8/spiff_linux_amd64.zip

sudo unzip spiff_linux_amd64.zip -d /usr/local/bin

Install yq

Source, download page, and docs.

# grab archive
wget https://github.com/mikefarah/yq/releases/download/1.14.0/yq_linux_amd64

# make available on path
sudo chown root:root yq_linux_amd64
sudo chmod ugo+r+x yq_linux_amd64
sudo mv yq_linux_amd64 /usr/local/bin/yq
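As a quick sanity check that parsing works, pull a field out of a throwaway YAML file.  The sample file and field names below are made up for illustration; yq’s read syntax differs across versions (see its docs), so plain awk is shown for the flat case:

```shell
# create a throwaway YAML file to parse (contents are illustrative only)
cat > /tmp/sample.yml <<'EOF'
jobs:
- name: zookeeper
  instances: 1
EOF

# for a simple flat field, plain awk also does the job
awk '/name:/ {print $3}' /tmp/sample.yml
```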

Install BOSH CLI v2

Install the Ubuntu prerequisite packages:

sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev libyaml-dev libsqlite3-dev sqlite3

The BOSH CLI v2 can be downloaded as a single binary from the download page.

# grab single binary
wget https://github.com/cloudfoundry/bosh-cli/releases/download/v5.4.0/bosh-cli-5.4.0-linux-amd64

# make available on path
mv bosh-cli-* bosh
chmod ugo+r+x bosh
sudo chown root:root bosh
sudo mv bosh /usr/local/bin/bosh
bosh --version

Now check the BOSH status.

$ bosh env
Expected non-empty Director URL
Exit code 1

BOSH will report that no director is set, which is expected.

Creating BOSH Director

We will use the BOSH CLI “create-env” command to have VirtualBox create the Director VM as described in the official documentation.

First we pull down my convenience scripts from github, then the standard BOSH deployment project from github.

sudo apt-get install git -y

# grab my github project with convenience scripts
git clone https://github.com/fabianlee/vbox-create-bosh-env.git
cd vbox-create-bosh-env

# as subfolder, grab standard bosh-deployment script
git clone https://github.com/cloudfoundry/bosh-deployment.git

The next step is running “bosh create-env” with the proper operations files, variables files, and variables.

Before doing this, modify “vars.yml” to reflect the number of CPUs, the RAM (MB), and the disk size (MB) you want the Director (a single VM) to use.  These values are used to populate the “override-size.yml” operations file.
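For reference, a sizing vars.yml might look like the sketch below.  The key names here are placeholders of my own choosing, so check the actual vars.yml and override-size.yml in the repository for the real ones:

```shell
# hypothetical vars.yml (key names are placeholders; see the repo's vars.yml)
cat > /tmp/vars.yml <<'EOF'
vm_cpus: 2
vm_memory: 4096
vm_ephemeral_disk: 16384
EOF

# read a value back to confirm the file parses as expected
awk '/vm_memory:/ {print $2}' /tmp/vars.yml
```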

Then you can either use my convenience script and call:

./do-env.sh create-env

OR you can run ‘create-env’ yourself:

bosh create-env bosh.yml \
 --state ./state.json \
 --vars-store ./creds.yml \
 -o bosh-deployment/virtualbox/cpi.yml \
 -o bosh-deployment/virtualbox/outbound-network.yml \
 -o bosh-deployment/bosh-lite.yml \
 -o bosh-deployment/bosh-lite-runc.yml \
 -o bosh-deployment/uaa.yml \
 -o bosh-deployment/credhub.yml \
 -o bosh-deployment/jumpbox-user.yml \
 -o override-size.yml \
 -v director_name="bosh-lite" \
 -v internal_ip=192.168.50.6 \
 -v internal_gw=192.168.50.1 \
 -v internal_cidr=192.168.50.0/24 \
 -v outbound_network_name=NatNetwork \
 --vars-file vars.yml

This will create a single VM in VirtualBox, accessible at the IP address 192.168.50.6.  The disks for this VM will be created in both the default storage location for VirtualBox as well as “~/.bosh_virtualbox_cpi”.

The files “creds.yml” and “state.json” are generated by create-env and contain, respectively, the certificates/keys for the installation and an inventory of the installation.
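Individual values can be pulled out of these generated files with “bosh int –path”.  The stand-in state.json below shows the idea with sed, for when you just want the VM CID without the CLI (the real file has many more fields):

```shell
# stand-in state.json with only the field we care about
cat > /tmp/state.json <<'EOF'
{
  "current_vm_cid": "vm-1234"
}
EOF

# bosh int /tmp/state.json --path /current_vm_cid would print the same value
sed -n 's/.*"current_vm_cid": "\([^"]*\)".*/\1/p' /tmp/state.json
```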

SSH access

The “jumpbox-user.yml” operations file used earlier makes it possible to ssh directly into the BOSH Lite VM using the following commands.

# get key from creds file
bosh int creds.yml --path /jumpbox_ssh/private_key > jumpbox.key
chmod 600 jumpbox.key

# ssh into Director
ssh jumpbox@192.168.50.6 -i jumpbox.key

# look at ports being listened on (25555 for CLI)
netstat -an | grep "LISTEN "

# exit jumpbox access
exit

Set alias and environment context

Set up an alias and environment context for the Director at 192.168.50.6 either by using my convenience script:

source ./bosh-alias.sh

bosh login

OR you can manually type the commands:

# set 'vbox' alias
bosh alias-env vbox -e 192.168.50.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)

# list current envs
bosh envs

# set credentials and env context to 'vbox'
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=$(bosh int ./creds.yml --path /admin_password)
export BOSH_ENVIRONMENT=vbox
# test login
bosh login

Successfully authenticated with UAA
Succeeded
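These exports only last for the current terminal.  One option (my own convenience, not part of BOSH) is to write them to a small file you can source in any new shell:

```shell
# write the env context to a sourceable file (filename is my choice)
cat > ~/bosh-vbox-env.sh <<'EOF'
export BOSH_CLIENT=admin
export BOSH_ENVIRONMENT=vbox
EOF
# the secret is intentionally not stored; derive it at source time instead
echo 'export BOSH_CLIENT_SECRET=$(bosh int ./creds.yml --path /admin_password)' >> ~/bosh-vbox-env.sh

# later, in a new shell:
# . ~/bosh-vbox-env.sh
```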

Set cloud config

The global cloud config can be set to the default provided.

bosh update-cloud-config bosh-deployment/warden/cloud-config.yml

Set runtime config

The runtime config should, at minimum, be set with the native DNS support that comes by default in BOSH (‘local_dns’ in bosh.yml).

bosh update-runtime-config bosh-deployment/runtime-configs/dns.yml --name dns

Modify network routes

Before moving on, modify the local network route table so that the containers (which will be located on 10.244.0.0/16) are accessible.  Below is the Ubuntu command to send any traffic bound for 10.244.0.0/16 into the BOSH Lite VM at 192.168.50.6.

# list current routes
route -n

# add route to containers
sudo route add -net 10.244.0.0/16 gw 192.168.50.6

Here is a script that shows how to do the same for Mac and other Linux flavors.

Test deployment

We will use Apache Zookeeper as a way to test BOSH.

# upload Ubuntu stemcell needed by ZooKeeper
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent?v=3586.60

# get ZooKeeper manifest
wget https://raw.githubusercontent.com/fabianlee/bosh-director-on-aws/master/zookeeper.yml -O zookeeper.yml

# deploy ZooKeeper with 1 instance in cluster
bosh -d zookeeper deploy -v zookeeper_instances=1 zookeeper.yml

# show avail deployments (zookeeper should exist now)
bosh deployments

# show VMs (should be 1)
bosh vms
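The “-v zookeeper_instances=1” flag works because zookeeper.yml contains a ((zookeeper_instances)) placeholder that ‘bosh deploy’ interpolates at deploy time.  A crude sed stand-in illustrates the substitution bosh performs:

```shell
# manifest fragment with a BOSH-style variable placeholder
cat > /tmp/snippet.yml <<'EOF'
instances: ((zookeeper_instances))
EOF

# bosh deploy -v zookeeper_instances=1 does this kind of substitution internally
sed 's/((zookeeper_instances))/1/' /tmp/snippet.yml
```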

If all the VMs are in a “running” state, that is a good indication that everything worked.  For a smoke test, run the following BOSH errand which is included in the ZooKeeper release:

bosh -d zookeeper run-errand smoke-tests

If you want another deep test, ssh into one of the ZooKeeper nodes and run commands using the ZooKeeper CLI client.

# use BOSH to ssh into the node
bosh -d zookeeper ssh zookeeper/0

# now inside first zookeeper node
# change directory and make JAVA_HOME available
cd /var/vcap/packages/zookeeper
export JAVA_HOME=/var/vcap/packages/openjdk-8/jre

# run CLI client
bin/zkCli.sh

# issue the following ZooKeeper CLI commands
ls /
create /zk_test my_data
ls /
get /zk_test
delete /zk_test
ls /
quit

# exit bosh ssh
exit

Additionally, you should be able to hit the ZooKeeper port 2181 from your host network using the custom route set up earlier.

# get list of all ZooKeeper IP
bosh -d zookeeper vms

# test route to ZooKeeper port 2181
bosh -d zookeeper vms | awk '{print $4}' | xargs -t -i sh -c "echo stat | nc {} 2181"
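The awk ‘{print $4}’ above assumes the IP address sits in the fourth whitespace-separated column of the ‘bosh vms’ table, which is how CLI v2 lays it out (Instance, Process State, AZ, IPs).  A quick check against a captured sample line:

```shell
# one sample line of 'bosh vms' output (layout assumed from CLI v2)
line='zookeeper/0a1b2c3d   running   z1   10.244.0.2'

# column 4 holds the IP in this layout
echo "$line" | awk '{print $4}'
```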

BOSH Lite state through host reboot

If the BOSH Lite guest VM is rebooted, or the host server is shut down/restarted/sleeps, the state will not persist and you will have to destroy the environment and then run ‘create-env’ again.

To avoid this problem, save the state of the VM before a host shutdown.

vboxmanage controlvm $(bosh int state.json --path /current_vm_cid) savestate

To start the BOSH Lite VM again:

vboxmanage startvm $(bosh int state.json --path /current_vm_cid) --type headless

Credit to Cristopher Banck for these commands.

Deleting the resources

If you no longer want the ZooKeeper deployment:

# delete the ZooKeeper deployment
bosh -d zookeeper delete-deployment

# should not show ZooKeeper anymore
bosh deployments

# delete the stemcell ZooKeeper required
bosh stemcells
bosh delete-stemcell bosh-warden-boshlite-ubuntu-trusty-go_agent/3586.60

# should not show stemcell anymore
bosh stemcells

And to delete the BOSH Lite VM completely from VirtualBox:

# same command, swap 'create-env' for 'delete-env'
./do-env.sh delete-env

Cloud Foundry

If your ultimate goal is to deploy Cloud Foundry on top of BOSH, then continue to my article here.

REFERENCES

Differences between CLI v1 and v2

https://bosh.io/docs/bosh-lite.html (recommended way of creating director)

Note that there is an older method of using a Vagrantfile to create a single BOSH Lite Director, but its usage is no longer recommended.

https://bosh.io/docs/cli-v2.html#install (install CLI v2)

https://bosh.cloudfoundry.org/docs/bosh-lite.html (bosh alias-env usage)

https://github.com/cloudfoundry/bosh-deployment/blob/master/docs/bosh-lite-on-vbox.md

http://www.starkandwayne.com/blog/bosh-lite-on-virtualbox-with-bosh2/

https://cloud.gov/docs/ops/creating-a-local-dev-environment-in-Virtual-Box/

https://medium.com/@ravijagannathan/install-cloud-foundry-on-bosh-lite-6d3b9a1e416a

https://mariash.github.io/learn-bosh/ (tutorial)

http://operator-workshop.cloudfoundry.org/ (BOSH and CF tutorial)

http://www.engineerbetter.com/blog/bosh-concourse2/ (BOSH and CI Concourse)

https://medium.com/concourse-ci/designing-a-dashboard-for-concourse-fe2e03248751 (concourse dashboard)

https://content.pivotal.io/blog/how-to-create-a-bosh-release-of-a-dns-server (example how to create BOSH release of DNS server)

https://github.com/pivotal-cf-experimental/dummy-boshrelease (dummy BOSH release, simple)

https://github.com/georgethebeatle/simple-bosh-release (simple bosh release apache)

https://blog.ik.am/entries/399 (simplest BOSH release)

https://ultimateguidetobosh.com/releases/ (explain BOSH releases)

https://github.com/starkandwayne/genesis (v2 framework for yaml)

http://engineering.pivotal.io/post/bosh-customize-stemcell/ (modifying stemcell)

https://ultimateguidetobosh.com/tutorials/bosh-lite-virtualbox/ (ssh into BOSH with jumpbox user)

https://ultimateguidetobosh.com/targeting-bosh-envs/ (env vars that allow you to shortcut bosh commands)

https://github.com/cloudfoundry/diego-ssh (ssh into containers manually)

https://github.com/cloudfoundry/cf-release

ZooKeeper commands that can be sent via nc: stat, mntr, isro


NOTES

In case the VirtualBox VM shuts down or reboots, you will have to re-run the create-env command from above.  (You will have to remove the current_manifest_sha line from state.json to force a redeploy.)  The containers will be lost after a VM restart, but you can restore your deployment with the ‘bosh cck’ command.  Alternatively, pause the VM from the VirtualBox UI before shutting down the VirtualBox host or putting your computer to sleep.
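Removing the current_manifest_sha line can be done with sed.  This is demonstrated on a stand-in state.json; run the same sed against the real file, and keep a backup before editing:

```shell
# stand-in file; the real state.json has many more fields
cat > /tmp/state-demo.json <<'EOF'
{
  "current_manifest_sha": "abc123",
  "current_vm_cid": "vm-1234"
}
EOF
cp /tmp/state-demo.json /tmp/state-demo.json.bak

# drop the line so create-env considers the manifest changed
sed -i '/current_manifest_sha/d' /tmp/state-demo.json
```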

While iterating on a release it’s common to run this sequence:

bosh create-release --force && bosh -e vbox upload-release && bosh -e vbox -d my-dep deploy manifest.yml

Delete route to BOSH Lite

sudo route delete -net 10.244.0.0 netmask 255.255.0.0 metric 0

If you have issues tearing down the environment, or you delete the BOSH Lite VM so that delete-env cannot be run, then you need to delete state.json.

Common BOSH env vars you may want to export

export BOSH_ENVIRONMENT=https://192.168.50.6:25555
#export BOSH_DEPLOYMENT=cf
export BOSH_CA_CERT="$(bosh int creds.yml --path /director_ssl/ca)"
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=$(bosh int ./creds.yml --path /admin_password)

If BOSH Lite VM rebuilt and new jumpbox.key created

ssh-keygen -f ~/.ssh/known_hosts -R 192.168.50.6