Ansible: Installing Ansible on Ubuntu 14.04

Ansible is an agentless configuration management tool that helps operations teams manage installation, patching, and command execution across a set of servers.

In this article I’ll describe how to deploy the latest release of Ansible using pip on Ubuntu 14.04, and then perform a quick validation against a client.


Although we could use apt-get to install Ansible from the Ubuntu repository, that would give us an older version.  So instead we will use the PyPI repository, which will give us the latest release:

# need python 2.6 or higher
$ python --version

# packages you need
$ sudo apt-get install software-properties-common python python-pip -y 
$ sudo apt-get install sshpass -y

# packages that you will want to have
$ sudo apt-get install apt-transport-https ca-certificates -y
$ sudo apt-get install python-dev libffi-dev libssl-dev -y 

# prereq
$ sudo pip install setuptools
$ sudo pip install pyopenssl ndg-httpsclient pyasn1

# install Ansible
$ sudo pip install ansible

# smoke test
$ /usr/local/bin/ansible --version
 config file = 
 configured module search path = Default w/o overrides
 python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]
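
As a side note, the "python 2.6 or higher" requirement above can be guarded mechanically.  Below is a minimal sketch in plain POSIX shell that parses a "major.minor.patch" string; the hard-coded version is illustrative, in practice you would capture it from "python --version":

```shell
# Parse a "major.minor.patch" version string in plain POSIX shell.
# The version is hard-coded here for illustration; in practice you
# would capture it from `python --version`.
ver="2.7.6"

major=${ver%%.*}           # strip everything after the first dot -> "2"
rest=${ver#*.}             # -> "7.6"
minor=${rest%%.*}          # -> "7"

# Require at least Python 2.6
if [ "$major" -gt 2 ] || { [ "$major" -eq 2 ] && [ "$minor" -ge 6 ]; }; then
    echo "python $ver is new enough"
else
    echo "python $ver is too old for Ansible" >&2
fi
```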

Note that you may see an invalid syntax message for jinja2, “async def auto_to_seq(value)”, during the install [1,2].  You can ignore it; Ansible 2.2 introduces support for Python 3, but we are only running Python 2.x on this host.
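
The "older version from apt" point at the top of this section can also be made concrete with a version-sort comparison.  A hedged sketch using GNU sort -V follows; both version numbers are illustrative placeholders, not the actual repository versions:

```shell
# Compare two version strings with GNU sort -V (version sort).
# Both versions are illustrative placeholders, not real apt/PyPI data.
apt_version="1.5.4"
pypi_version="2.2.0"

# Under version sort, the smaller of the two versions comes first.
older=$(printf '%s\n%s\n' "$apt_version" "$pypi_version" | sort -V | head -n1)

if [ "$older" = "$apt_version" ]; then
    echo "apt would install the older release: $apt_version"
fi
```

On a real system you would substitute the candidate version reported by "apt-cache policy ansible" and the release pip reports.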


Ansible doesn’t require root privileges; there is no daemon or backing database.  Running it as a normal user in a Python virtualenv would be perfectly fine.  But in order to keep a set of global defaults, let’s put boilerplate Ansible configuration files in the standard location.

# default location
$ sudo mkdir -p /etc/ansible

# inventory file
$ echo -e "[local]\n127.0.0.1" | sudo tee -a /etc/ansible/hosts

# global configuration
$ sudo wget -O /etc/ansible/ansible.cfg

Now let’s go back and create a local Ansible workspace for our user.

# local ansible work directory
$ mkdir ~/ansible
$ mkdir ~/ansible/files
$ mkdir ~/ansible/playbooks
$ mkdir ~/ansible/templates
$ mkdir ~/ansible/group_vars

# local inventory file
$ echo -e "[local]\n127.0.0.1" >> ~/ansible/hosts

# copy over global config we can customize
$ cp /etc/ansible/ansible.cfg ~/ansible/ansible.cfg

# set ANSIBLE_CONFIG for local development
$ echo ANSIBLE_CONFIG=~/ansible/ansible.cfg >> ~/.profile
$ source ~/.profile
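
Exporting ANSIBLE_CONFIG works because Ansible resolves its config file in a fixed precedence order: the ANSIBLE_CONFIG environment variable, then ./ansible.cfg, then ~/.ansible.cfg, then /etc/ansible/ansible.cfg.  Here is a hedged shell sketch that emulates this documented lookup; the function name is my own, not part of Ansible:

```shell
# Emulate Ansible's documented config lookup order; this is a sketch,
# not Ansible's own code.
find_ansible_cfg() {
    for candidate in "${ANSIBLE_CONFIG:-}" ./ansible.cfg \
                     "$HOME/.ansible.cfg" /etc/ansible/ansible.cfg; do
        if [ -n "$candidate" ] && [ -f "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1    # nothing found; Ansible falls back to built-in defaults
}

# Example: a config pointed to by ANSIBLE_CONFIG wins over all others
tmpcfg=$(mktemp)
ANSIBLE_CONFIG="$tmpcfg"
find_ansible_cfg
```

This is why the export in ~/.profile makes the workspace config win over /etc/ansible/ansible.cfg.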

Then make a few modifications to our custom config at “~/ansible/ansible.cfg”:

inventory = ~/ansible/hosts
log_path = ~/ansible/ansible.log
private_key_file = ~/.ssh/id_rsa
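
These keys live under the [defaults] section of ansible.cfg.  If you prefer to script the edit instead of using an editor, here is a minimal sketch that writes the three settings to a scratch file standing in for ~/ansible/ansible.cfg:

```shell
# Write the three settings under [defaults], into a scratch file that
# stands in for ~/ansible/ansible.cfg.
cfg=$(mktemp)

cat > "$cfg" <<'EOF'
[defaults]
inventory = ~/ansible/hosts
log_path = ~/ansible/ansible.log
private_key_file = ~/.ssh/id_rsa
EOF

# Sanity check the result
grep '^inventory' "$cfg"
```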

Now an invocation of Ansible should provide a banner message reflecting our local config file and without any error messages:

$ ansible --version

 config file = /home/vagrant/ansible/ansible.cfg
 configured module search path = Default w/o overrides
 python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]

Quick validation of localhost

Our first test of Ansible connectivity will be to localhost, the host where Ansible is installed.  Because I am using a Vagrant box, the standard user login is “vagrant”, but you will need to tailor this to your own host.

# ensure ssh server running on box
$ sudo service ssh status
$ netstat -an | grep 22 | grep "LISTEN "

# avoids the ssh prompt, or disable host_key_checking in the config
$ ssh-keyscan 127.0.0.1 >> ~/.ssh/known_hosts

# use Ansible to ping all hosts in inventory
# connect as 'vagrant' user, -k is 'ask for password'
$ ansible all -m ping -u vagrant -k

SSH password: 
127.0.0.1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

# use shell module to check free memory on localhost
$ ansible all -m shell -a "free -m" -u vagrant -k

# use setup module to show facts about localhost
$ ansible all -m setup -u vagrant -k

# use shell module to show bound ports on localhost
$ ansible all -m shell -a "netstat -an | grep 'LISTEN '" -u vagrant -k

SSH Key-Based Authentication

In order to have an agentless connection between the Ansible host and its targets, standard ssh is leveraged as shown in the section above.  But instead of using passwords for authentication, Ansible prefers a public/private ssh key pair.  Generating this pair is as easy as running ssh-keygen:

# accept all defaults, just press <ENTER>
$ ssh-keygen

This will create a private/public key pair in the directory “~/.ssh”.

The private key on the ssh client side (the Ansible side) is found in the file “id_rsa”.  And the contents of the public key file, “id_rsa.pub”, are what should be appended to the “~/.ssh/authorized_keys” file on each target host you want access to.

This seeding of the public key on each target host is typically a provisioning issue and happens differently if you are using Vagrant locally, DigitalOcean, or Amazon EC2.
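
For reference, what seeding a public key amounts to on each target host is roughly the following.  This is a hedged sketch against a scratch directory with a placeholder key string; the authorized_key module we use later automates this and also handles ownership and deduplication:

```shell
# Roughly what seeding a public key means on the target host:
# append it to authorized_keys and enforce sshd's permission rules.
# A scratch directory and fake key stand in for ~/.ssh and a real key.
sshdir=$(mktemp -d)
pubkey="ssh-rsa AAAAB3...placeholder user@example"

chmod 700 "$sshdir"                  # sshd requires a private ~/.ssh

# Append only if not already present, so reruns stay idempotent
touch "$sshdir/authorized_keys"
grep -qxF "$pubkey" "$sshdir/authorized_keys" || \
    echo "$pubkey" >> "$sshdir/authorized_keys"

chmod 600 "$sshdir/authorized_keys"  # sshd rejects overly open key files
```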

The methodology we will use for this example is to first run an Ansible playbook that copies the public key to the target host using a username/password, thus bootstrapping the target host.  Any subsequent execution is then free to leverage key-based authentication.

Create a playbook file at “~/ansible/playbooks/bootstrap-vagrant.yaml” that uses the authorized_key module to add the public key content:

- hosts: all
  tasks:
    - name: Add SSH public key to user remote
      authorized_key: user="{{ ansible_user_id }}" key="{{ lookup('file', '../files/id_rsa.pub') }}"

Then we will copy the public key into our Ansible files directory, and invoke the playbook:

# copy to Ansible accessible 'files' directory
$ cp ~/.ssh/id_rsa.pub ~/ansible/files/

# invoke Ansible playbook to populate target host with key
$ ansible-playbook playbooks/bootstrap-vagrant.yaml -u vagrant -k
SSH password: 

PLAY [all] ***********************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [127.0.0.1]

TASK [Add SSH public key to user remote] *****************************************
changed: [127.0.0.1]

PLAY RECAP ***********************************************************************
127.0.0.1                  : ok=2    changed=1    unreachable=0    failed=0

Now that we have seeded the target host with the public key, we can use key-based authentication to run any Ansible commands:

$ ansible all -m ping
127.0.0.1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

$ ansible all -m shell -a "free -m"

Validation of remote host

Following the same ssh bootstrapping process described in the section above, we will now have Ansible connect to a remote host.  First, add the remote host to the inventory file, “~/ansible/hosts”, as shown below:

[remote]
trusty1 ansible_host= ansible_user=vagrant

“remote” is just an arbitrary group name we are assigning, and we have one remote Ubuntu host named “trusty1”.  If this is not a DNS name resolvable from the Ansible host, we can specify the IP address explicitly via “ansible_host”.  We also explicitly set the user we want to log in as.
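
Putting both groups together, the combined inventory looks roughly like the sketch below, written to a scratch file; 192.0.2.10 is a documentation-range placeholder standing in for your host's real IP:

```shell
# Build the combined inventory in a scratch file; 192.0.2.10 is a
# placeholder IP from the documentation range, not a real host.
inv=$(mktemp)

cat > "$inv" <<'EOF'
[local]
127.0.0.1

[remote]
trusty1 ansible_host=192.0.2.10 ansible_user=vagrant
EOF

# Group headers and host lines are plain text, easy to sanity check
grep -A1 '^\[remote\]' "$inv"
```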

Then we want to invoke the bootstrapping playbook to copy the ssh key to the target host.  We do not need to specify “-u vagrant” to ansible-playbook because we have already set “ansible_user=vagrant” in the inventory file.

# avoids the ssh prompt, or disable host_key_checking in the config
$ ssh-keyscan trusty1 >> ~/.ssh/known_hosts

$ ansible-playbook --limit remote playbooks/bootstrap-vagrant.yaml -k

SSH password: 

PLAY [all] ***********************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [trusty1]

TASK [Add SSH public key to user remote] *****************************************
changed: [trusty1]

PLAY RECAP ***********************************************************************
trusty1                    : ok=2    changed=1    unreachable=0    failed=0

Finally, verify that we can run a couple of arbitrary commands against this target host using key-based authentication:

$ ansible trusty1 -m ping
trusty1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

$ ansible trusty1 -m shell -a "free -m"


REFERENCES

(playbook for adding authorized_key)
(passing vars using group mechanism)
(adhoc commands)