Ansible: Installing Ansible on Ubuntu 16.04

Ansible is an agentless configuration management tool that helps operations teams manage installation, patching, and command execution across a set of servers.

In this article I’ll describe how to deploy the latest release of Ansible using pip on Ubuntu 16.04, and then perform a quick validation against a client.

Installation

Although we could use apt-get to install Ansible from the Ubuntu repository, that would give us an older version.  So instead we will use the PyPI repository, which gives us the latest release.
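
If you are curious how far behind the distribution package is, apt-cache can show the candidate version without installing anything:

# candidate version from the Ubuntu repository, for comparison only
$ apt-cache policy ansible

With that settled, install from PyPI: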

# Ubuntu 16.04 comes with Python3
$ python3 --version

# get python 2.x
$ sudo apt-get update
$ sudo apt-get install python-minimal -y
$ python --version
Python 2.7.12

# packages you need
$ sudo apt-get install software-properties-common python python-pip -y 
$ sudo apt-get install sshpass -y

# packages that you will want to have
$ sudo apt-get install apt-transport-https ca-certificates -y
$ sudo apt-get install python-dev libffi-dev libssl-dev -y 

# prereqs: upgrade pip and setuptools; pyopenssl/ndg-httpsclient/pyasn1 give Python 2.7 modern TLS support
$ sudo -H pip install pip --upgrade
$ sudo -H pip install setuptools --upgrade
$ sudo -H pip install pyopenssl ndg-httpsclient pyasn1

# install Ansible
$ sudo -H pip install ansible

# smoke test
$ /usr/local/bin/ansible --version
ansible 2.3.1.0
 config file = 
 configured module search path = Default w/o overrides
 python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

Configuration

Ansible doesn’t require root privileges, and there is no daemon or backing database; running it as a normal user in a Python virtualenv would be perfectly fine.  But in order to keep a set of global defaults, let’s put boilerplate Ansible configuration files in the standard location.
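
If you do prefer the virtualenv route, a minimal sketch (assuming an arbitrary ~/ansible-venv path) looks like this:

# hypothetical virtualenv alternative, not used in the rest of this article
$ sudo apt-get install python-virtualenv -y
$ virtualenv ~/ansible-venv
$ source ~/ansible-venv/bin/activate
$ pip install ansible

For the rest of this article, though, we assume the system-wide pip install and the standard configuration location.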

# default location
$ sudo mkdir -p /etc/ansible

# inventory file
$ echo -e "[local]\n127.0.0.1" | sudo tee -a /etc/ansible/hosts

# global configuration
$ sudo wget -O /etc/ansible/ansible.cfg https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
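
At this point Ansible can already resolve the default inventory, which we can confirm with the built-in --list-hosts flag:

# sanity check: show the hosts Ansible sees in /etc/ansible/hosts
$ ansible all --list-hosts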

Now let’s go back and create a local Ansible workspace for our user.

# local ansible work directory
$ mkdir ~/ansible
$ mkdir ~/ansible/files
$ mkdir ~/ansible/playbooks
$ mkdir ~/ansible/templates
$ mkdir ~/ansible/group_vars

# local inventory file
$ echo -e "[local]\n127.0.0.1" >> ~/ansible/hosts

# copy over global config we can customize
$ cp /etc/ansible/ansible.cfg ~/ansible/ansible.cfg

# set ANSIBLE_CONFIG for local development
$ echo ANSIBLE_CONFIG=~/ansible/ansible.cfg >> ~/.profile
$ source ~/.profile

Then make a few modifications to our custom config at “~/ansible/ansible.cfg”:

inventory = ~/ansible/hosts
log_path = ~/ansible/ansible.log
private_key_file = ~/.ssh/id_rsa
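
A quick grep confirms the overrides; note that in the stock example file these keys ship commented out, so the uncommented versions need to be present:

# verify our three overrides are active in the local config
$ grep -E '^(inventory|log_path|private_key_file)' ~/ansible/ansible.cfg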

Now an invocation of Ansible should print a banner reflecting our local config file, without any error messages:

$ ansible --version

ansible 2.3.1.0
 config file = /home/ubuntu/ansible/ansible.cfg
 configured module search path = Default w/o overrides
 python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

Quick validation of localhost

Our first test of Ansible connectivity will be to localhost 127.0.0.1, the host where Ansible is installed.  Because I am using the ubuntu/xenial64 Vagrant box, the standard user login is “ubuntu”.  Tailor the credentials to your own host.

# ensure ssh server running on box
$ sudo service ssh status
$ netstat -an | grep 22 | grep "LISTEN "

# avoids the ssh host-key prompt (alternatively, disable host_key_checking in the config)
$ ssh-keyscan 127.0.0.1 >> ~/.ssh/known_hosts

# use Ansible to ping all hosts in inventory
# connect as 'ubuntu' user, -k is 'ask for password'
$ ansible all -m ping -u ubuntu -k

SSH password: 
127.0.0.1 | SUCCESS => {
 "changed": false, 
 "ping": "pong"
}

# use shell module to check free memory on localhost
$ ansible 127.0.0.1 -m shell -a "free -m" -u ubuntu -k

# use setup module to show facts about localhost
$ ansible 127.0.0.1 -m setup -u ubuntu -k

# use shell module to show bound ports on localhost
$ ansible 127.0.0.1 -m shell -a "netstat -an | grep 'LISTEN '" -u ubuntu -k
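
Modules that need root on the target work the same way once we add privilege escalation flags; the package name below is an arbitrary example:

# use apt module with become (-b) and become-password prompt (-K) to ensure a package is present
$ ansible 127.0.0.1 -m apt -a "name=tmux state=present" -u ubuntu -k -b -K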

SSH Key-Based Authentication

Ansible’s agentless connections between the control host and its targets ride on standard ssh, as shown in the section above.  But instead of using passwords for authentication, Ansible prefers a set of public/private ssh keys.  Generating this pair is as easy as running ssh-keygen:

# accept all defaults, just press <ENTER>
$ ssh-keygen

This will create a private/public key pair in the directory “~/.ssh”.
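
If you would rather script this step, ssh-keygen can also run non-interactively; the key type and size here are just reasonable defaults:

# non-interactive equivalent: RSA key with an empty passphrase
$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa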

The private key on the ssh client side (the Ansible side) is found in the file “id_rsa”.  And the contents of the public key file “id_rsa.pub” are what should be appended to the “~/.ssh/authorized_keys” file on each target host you want access to.

This seeding of the public key on each target host is typically a provisioning issue and happens differently if you are using Vagrant locally, DigitalOcean, or Amazon EC2.

The methodology we will use for this example is to first run an Ansible playbook that copies the public key to the target host using username/password authentication, thus bootstrapping the target host.  Any subsequent execution is then free to leverage key-based authentication.
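
For a single host, the same seeding could be done manually with OpenSSH’s ssh-copy-id, but the playbook approach below scales to many hosts:

# manual alternative for one host (not used in this article)
$ ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@127.0.0.1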

Create a playbook file at “~/ansible/playbooks/bootstrap-ubuntu.yaml” that uses the authorized_key module to add the public key content:

---
- hosts: all

  tasks:

  - name: Add SSH public key to user remote
    authorized_key:
      user=ubuntu
      key="{{ lookup('file', '../files/workstation.pub') }}"
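
Before running it, ansible-playbook can parse the file without executing anything:

# optional: verify the playbook syntax (run from ~/ansible so relative paths resolve)
$ ansible-playbook playbooks/bootstrap-ubuntu.yaml --syntax-check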

Then we will copy the public key into our Ansible files directory, and invoke the playbook:

# copy id_rsa.pub to Ansible accessible 'files' directory
$ cp ~/.ssh/id_rsa.pub ~/ansible/files/workstation.pub

# invoke Ansible playbook to populate target host with key
$ ansible-playbook playbooks/bootstrap-ubuntu.yaml -u ubuntu -k
SSH password: 

PLAY [all] ***********************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [127.0.0.1]

TASK [Add SSH public key to user remote] *****************************************
changed: [127.0.0.1]

PLAY RECAP ***********************************************************************
127.0.0.1                  : ok=2    changed=1    unreachable=0    failed=0 

Now that we have seeded the target host with the public key, we can use key-based authentication to run any Ansible command, without specifying a username or interactively entering a password.

$ ansible 127.0.0.1 -m ping
127.0.0.1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

$ ansible 127.0.0.1 -m shell -a "free -m"

Validation of remote host

Following the same ssh bootstrapping process described in the section above, we will now have Ansible connect to a remote Ubuntu 14.04 host.  First, add the remote host to the inventory file “~/ansible/hosts” as shown below:

[local]
127.0.0.1

[remote]
trusty1 ansible_host=192.168.2.65 ansible_user=vagrant

“remote” is just an arbitrary group name we are assigning, and we have one remote Ubuntu 14.04 host named “trusty1”.  If this is not a DNS name resolvable from the Ansible host, we can specify the IP address explicitly as shown above.  And we explicitly set the user that we want to log in as.
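
We can confirm that the new group resolves before attempting any connection:

# show which hosts are in the 'remote' group
$ ansible remote --list-hosts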

Note that the local Ansible host is a Vagrant box based on Ubuntu 16.04, whose standard user is “ubuntu”, but the remote host we are going to connect to is an ubuntu/trusty64 Vagrant box based on Ubuntu 14.04, whose standard user is “vagrant”.

The playbook we used in the last section established ssh credentials for a user named “ubuntu”, but the standard user on our target remote host is “vagrant”, so we will need a new playbook.

Create a new playbook file at “~/ansible/playbooks/bootstrap-vagrant.yaml”.  Notice that the “user=” has changed to reflect we want to seed the “vagrant” user with the public ssh key.

---
- hosts: all

  tasks:

  - name: Add SSH public key to user remote
    authorized_key:
      user=vagrant
      key="{{ lookup('file', '../files/workstation.pub') }}"

Then we invoke the bootstrapping playbook to get the ssh key copied to the target host.  We do not need to pass “-u vagrant” to ansible-playbook because we already set “ansible_user=vagrant” in the inventory file.

# avoids the ssh host-key prompt (alternatively, disable host_key_checking in the config)
$ ssh-keyscan trusty1 >> ~/.ssh/known_hosts

# connect as 'vagrant' user
$ ansible-playbook --limit remote playbooks/bootstrap-vagrant.yaml -k

SSH password: 

PLAY [all] ***********************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [trusty1]

TASK [Add SSH public key to user remote] *****************************************
changed: [trusty1]

PLAY RECAP ***********************************************************************
trusty1                    : ok=2    changed=1    unreachable=0    failed=0

Finally, verify that we can run a couple of arbitrary commands against this target host using key-based authentication:

$ ansible trusty1 -m ping
trusty1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

# run command
$ ansible trusty1 -m shell -a "free -m"

# show facts
$ ansible trusty1 -m setup
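
The setup module also accepts a filter argument when the full fact dump is too noisy:

# show only distribution-related facts
$ ansible trusty1 -m setup -a "filter=ansible_distribution*"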

REFERENCES

https://askubuntu.com/questions/832137/ubuntu-xenial64-box-password
https://bugs.launchpad.net/cloud-images/+bug/1569237 (password for ubuntu/xenial64)
~/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-xenial64/20170519.0.0/virtualbox/Vagrantfile (the ubuntu/xenial64 box’s bundled Vagrantfile)