Ansible: orchestrating ssh access through a bastion host

Ansible uses ssh to configure the hosts in its target inventory, but whether in an on-premise datacenter or a hyperscaler such as AWS EC2, GCP, or Azure, the target hosts are often purposely placed in deeper private subnets that cannot be reached directly from the Ansible orchestrator host. One solution is a bastion (jumpbox) host that sits between the orchestrator and the private hosts, forwarding the ssh connections.
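As a minimal sketch (the host names and addresses here are hypothetical), you can validate the path through the bastion manually with ProxyJump, then give Ansible the same hint via its ansible_ssh_common_args variable:

# verify the private target is reachable through the bastion
ssh -o ProxyJump=ubuntu@bastion.example.com ubuntu@10.10.20.5

# equivalent setting for Ansible, e.g. in group_vars
# ansible_ssh_common_args: '-o ProxyJump=ubuntu@bastion.example.com'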

Azure: installing the Azure CLI on Ubuntu

Managing resources in Azure from the command line can be done natively from Ubuntu using the Azure CLI. First, add the prerequisite packages:

sudo apt-get update
sudo apt-get install -y ca-certificates curl apt-transport-https lsb-release gnupg

Then install the Microsoft signing key and add the custom repository.
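A sketch of those remaining steps, following the apt flow Microsoft documents (the key destination and repository line below come from their docs, as the excerpt truncates here):

# install the Microsoft signing key
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
  gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/microsoft.gpg > /dev/null

# add the azure-cli repository for this Ubuntu release, then install
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
  sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-get update && sudo apt-get install -y azure-cli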

Terraform: invoking a startup script for an EC2 aws_instance

You can bake a startup script directly into the creation of your EC2 instance when using Terraform. Although complex post-configuration should be left to tools such as Ansible, essential bootstrap commands or custom routes for instances in private subnets are reasons why you might need to use this hook.
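As a stand-in sketch, this is the kind of small bootstrap script typically handed to the instance through the aws_instance user_data argument (the route and addresses are hypothetical):

#!/bin/bash
# refresh packages and add a custom route back to an internal network
apt-get update -y
ip route add 10.100.0.0/16 via 10.0.1.1 dev eth0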

Terraform: using update-alternatives to manage multiple terraform binaries

If you have multiple terraform projects, it can be necessary to support multiple versions of the terraform binary to match module and provider dependencies. Instead of creating a custom solution of binary copies and links, this can be done using the Alternatives concept, which handles these symbolic links in a standard way.
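As a quick sketch (the install paths and versions are hypothetical), each binary is registered as an alternative behind a single 'terraform' link, and you switch between them interactively:

# register two terraform binaries; the higher priority wins in auto mode
sudo update-alternatives --install /usr/local/bin/terraform terraform /opt/terraform/terraform-0.13.7 10
sudo update-alternatives --install /usr/local/bin/terraform terraform /opt/terraform/terraform-1.4.6 20

# choose the active version interactively
sudo update-alternatives --config terraform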

Ansible: installing linux-headers matching kernel for Ubuntu

For Ubuntu, there are a couple of ways you can install the linux-headers package matching the kernel version. You can either explicitly specify the version or use the meta package, as shown below.

# specify kernel version using subshell
sudo apt-get install -y linux-headers-$(uname -r)

# OR meta package that auto-matches the kernel
sudo apt-get install -y linux-headers-generic

Kubernetes: Using Downward API metadata from a GoLang application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I'll show how you can read the environment variables and mounted files from inside a containerized GoLang-based application.

Kubernetes: Using Downward API metadata from a Python application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I'll show how you can read the environment variables and mounted files from inside a containerized Python-based application.

Kubernetes: using the Downward API to access pod/container metadata

The Kubernetes Downward API allows a pod to get access to metadata about itself and the cluster without creating a tight coupling to the Kubernetes API. For example, information such as pod name, labels, annotations, IP address, node, and cpu/memory limits can be made available inside the pod. In this article, I'll show how to use the Downward API to access this pod/container metadata.
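As a minimal sketch (pod name and image are arbitrary), a fieldRef can populate an environment variable that is then visible inside the container:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
EOF

# once the pod is Running, read the injected value back
kubectl exec downward-demo -- printenv MY_POD_NAME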

GCP: pulling an image from the Container Registry of another project

In a previous article, I discussed the advantages of keeping container images in the private Google Container Registry of a project. And if you have a GKE cluster in the exact same project, then image pulls happen seamlessly without any additional configuration required. However, if the GKE cluster is in a different project than the registry, access must be granted explicitly.
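One common way to grant that access (project and service account names below are hypothetical) relies on gcr.io images being backed by a GCS bucket named artifacts.<project-id>.appspot.com, so the cluster's node service account needs read access on that bucket:

gsutil iam ch \
  serviceAccount:gke-nodes@cluster-project.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://artifacts.registry-project.appspot.com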

GCP: pushing GKE images into gcr.io to avoid pull rate limits

Docker Hub now enforces pull rate limits (since November 2020). And unfortunately, this limit is often reached at critical moments such as upgrades or infrastructure events when bulk pod recreation is happening. One way to avoid this problem is to place your images into an alternate image registry. This could mean a lot of work.
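For a single image, mirroring into gcr.io is just a pull, tag, and push (the project id is hypothetical):

# mirror a Docker Hub image into your project's gcr.io registry
docker pull nginx:1.21
docker tag nginx:1.21 gcr.io/my-project/nginx:1.21
docker push gcr.io/my-project/nginx:1.21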

Kubernetes: detecting the installed version of nginx ingress

If you need to determine the version of the nginx ingress controller deployed, you can invoke the ingress controller binary with the '--version' flag. But this binary is located in the ingress-nginx-controller pod, so you need a 'kubectl exec' like the one shown below.
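A sketch of that lookup (the label selector and binary path are assumptions based on a standard ingress-nginx deployment):

# namespace of your nginx ingress
ingress_ns="default"

# find a running controller pod
podname=$(kubectl get pods -n $ingress_ns \
  -l app.kubernetes.io/name=ingress-nginx \
  --field-selector=status.phase=Running \
  -o jsonpath='{.items[0].metadata.name}')

# invoke the controller binary with its version flag
kubectl exec -n $ingress_ns $podname -- /nginx-ingress-controller --version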

Kubernetes: testing pod communication directly from istio sidecar proxy

Once you introduce an istio sidecar proxy into your deployment, it becomes another point at which you might need to troubleshoot network connectivity to the primary container. Assuming you have deployed a pod with an app label "helloworld" in the default namespace listening on port 5000, you can use a command like the following to test communication directly from the sidecar.
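A sketch of that test (label, namespace, and port come from the assumptions above; curl being present in the istio-proxy image is an additional assumption):

# find the pod by its app label
podname=$(kubectl get pods -n default -l app=helloworld \
  -o jsonpath='{.items[0].metadata.name}')

# curl the primary container from inside the istio-proxy sidecar
kubectl exec -n default $podname -c istio-proxy -- curl -s http://localhost:5000/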

Kubernetes: istio Gateway in a different namespace than VirtualService

If your istio ingress Gateway is in a different namespace than your VirtualService, then you need to make sure you prefix the gateway reference with that namespace. For example, suppose your istio ingress Gateway is in the 'default' namespace, yet your Deployment, Service, and VirtualService are in the namespace 'helloworld'.
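A sketch of such a VirtualService (names and port are hypothetical; the key line is the <namespace>/<gateway> prefix in the gateways list):

kubectl apply -n helloworld -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  # prefix with the namespace where the Gateway actually lives
  - default/helloworld-gateway
  http:
  - route:
    - destination:
        host: helloworld
        port:
          number: 5000
EOF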

Docker: determining container responsible for largest overlay directories

Whether you are running a docker daemon on a development host or a GKE worker node using Docker as the container engine, it is important to understand the amount of disk storage being utilized by the containers. If you navigate into the '/var/lib/docker/overlay2' directory, you will see cryptic hashed ids representing the container layers instead of friendly container names.
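A sketch for tying those hashed directories back to containers (paths assume the default docker root):

# largest overlay2 layer directories
sudo du -sh /var/lib/docker/overlay2/* | sort -rh | head -5

# map each running container name to its overlay2 merged directory
docker ps -q | xargs docker inspect \
  --format '{{.Name}} {{.GraphDriver.Data.MergedDir}}'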

Ansible: pulling values from nested dictionaries when path might not exist

It is typically straightforward to resolve Ansible errors where a simple variable used within an action is not defined. However, when dealing with nested dictionary variables in your Ansible tasks, it can become more complex, because not only can the leaf be missing, but so can any of the parent container paths.
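As a quick illustration (the variable names are hypothetical; this relies on Ansible's chainable undefined so that default() also catches a missing parent):

# 'parent' exists but 'child' does not; default() supplies the fallback
ansible localhost -e '{"cfg": {"parent": {}}}' \
  -m debug -a "msg={{ cfg.parent.child | default('fallback') }}"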

Ansible: action only executed if tag set, avoiding ‘all’ behavior

Even if the actions in your playbook/role are tagged to separate their logic, this selective execution will not behave as expected when the playbook is called without any tags, because then you fall back to the special 'all' behavior. Consider a playbook with the following actions.

tasks:
  - debug: msg="when tag 'run'"
    tags: run
  - debug: msg="when …
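The invocation then determines what runs (the playbook name is a placeholder):

# runs only the tasks tagged 'run'
ansible-playbook playbook.yml --tags run

# called without tags: the special 'all' behavior runs every task
ansible-playbook playbook.yml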

Ansible: creating SAN certificates with a custom root CA

Ansible has support for generating self-signed certificates as well as certificates using a custom root CA (certificate authority). This is possible using the community.crypto collection. I've put this into a role named ansible-role-cert-with-ca available on github, and it can be used from a playbook like below:

vars:
  # custom CA, leaving undefined will create self-signed …
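Once generated (the file name here is hypothetical), the SAN entries of the resulting certificate can be confirmed with openssl:

openssl x509 -in myhost.crt -noout -text | grep -A1 "Subject Alternative Name"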

Ubuntu: loading a key into ssh-agent at login with a user-level systemd service

By default, if an SSH key file dropped into your personal '~/.ssh' directory matches one of a set of standard names (id_rsa, id_dsa, id_ecdsa, id_ed25519, or identity), then it will automatically be used as an identity when logging into a remote site. For example, this makes it simple to comply with Github's requirement to use SSH keys.
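For keys that do not use those standard names, a user-level systemd service can start an agent at login; a minimal sketch (the unit name and paths are conventions, not from the excerpt, and the ssh-add step shown as a comment assumes an unencrypted key):

mkdir -p ~/.config/systemd/user

cat > ~/.config/systemd/user/ssh-agent.service <<'EOF'
[Unit]
Description=Personal ssh-agent

[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK
# a key could then be loaded via e.g. ExecStartPost=/usr/bin/ssh-add %h/.ssh/mykey

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user enable --now ssh-agent.service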

Bash: performing multiple substitutions with a single sed invocation

Instead of stringing together sed multiple times in a pipeline, it is also possible to make multiple substitutions with a single invocation of sed. Consider the following example, which replaces the word 'hello' as well as 'quick' in the paragraph:

$ sed "s/hello/goodbye/g; s/quick/slow/g" <<EOF
hello, world!
hello, universe!
the quick brown fox
EOF
goodbye, world!
goodbye, universe!
the slow brown fox
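The same result can be had by passing one -e expression per substitution, which can be easier to read (infile is a placeholder):

sed -e "s/hello/goodbye/g" -e "s/quick/slow/g" infile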

Ansible: generating content for all template files using with_fileglob

There are scenarios where, instead of being explicit about each file name when generating templates from a role, you may want to take all the suffixed files in the 'templates' directory and dump their generated content into a destination directory. As a quick example, imagine being in a role and wanting every script in its 'templates' directory rendered into a single destination directory.

Kubernetes: Updating an existing ConfigMap using kubectl replace

Creating a ConfigMap using 'kubectl create configmap' is a straightforward operation. However, there is not a corresponding 'kubectl apply' that can easily update that ConfigMap. As an example, here are the commands for the creation of a simple ConfigMap using a file named "ConfigMap-test1.yaml".

$ cat ConfigMap-test1.yaml
test1:
  foo: bar
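A sketch of the create-then-update cycle (names come from the excerpt; the dry-run pipe into replace is the pattern the title describes):

# create and then show the ConfigMap
kubectl create configmap test1 --from-file=ConfigMap-test1.yaml
kubectl get configmap test1 -o yaml

# after editing the source file, regenerate and replace it in one shot
kubectl create configmap test1 --from-file=ConfigMap-test1.yaml \
  --dry-run=client -o yaml | kubectl replace -f -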

Python: Setting the preferred Python version on Bionic 18 and Focal 20

Even though Python2 reached EOL in January 2020, there are still lagging applications and modules that force its continued use on certain systems. If that is your situation, you need the ability to give preference to Python3 unless a program explicitly invokes Python2. If you are using Ubuntu Focal 20, then you have an easy option.
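A sketch for both releases (python-is-python3 is Focal's packaged route; the alternatives entries are one common approach on Bionic, with interpreter paths assumed):

# Ubuntu Focal 20.04: point 'python' at python3 with a single package
sudo apt-get install -y python-is-python3

# Ubuntu Bionic 18.04: register alternatives, giving python3 higher priority
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 10
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 20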

Ubuntu: using ldapsearch to query against a secure Windows Domain Controller

Using ldapsearch to query against the insecure port of a Windows Domain Controller is straightforward. However, it can be challenging to get all the pieces in place for a production environment where the secure port must be used and the root CA certificate is typically not from a public CA. Assuming the standard ports, the insecure ldap port is 389 and the secure ldaps port is 636.
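A sketch of a secure query (host, bind account, base DN, and CA path are hypothetical; LDAPTLS_CACERT points the OpenLDAP client at the private root CA):

LDAPTLS_CACERT=/etc/ssl/certs/corp-root-ca.pem ldapsearch \
  -H ldaps://dc1.corp.example.com:636 \
  -D "svc-ldap@corp.example.com" -W \
  -b "dc=corp,dc=example,dc=com" \
  "(sAMAccountName=jdoe)"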

Bash: using multiple values from an input pipeline to construct and execute a command

If you have multiple values coming through the Bash input pipeline, it can be difficult to process them into a complex, formatted set of arguments unless you use intermediate temporary files. But one way to neatly put together a complex set of arguments from an input pipeline with multiple values is to use awk printf, as sketched below.
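A small illustration (mycommand is a placeholder; echo is used so the constructed invocation is printed rather than executed):

# turn two-column input into repeated --env name=value arguments
printf "FOO 1\nBAR 2\n" | \
  awk '{printf "--env %s=%s ", $1, $2}' | \
  xargs echo mycommand

# prints: mycommand --env FOO=1 --env BAR=2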

Python: exploring the use of startswith against a list: tuple, regex, list comprehension, lambda

As with any programming language, Python has a multitude of ways to accomplish the same task. In this article, I will explore the idea of taking a string and checking if it 'startswith' any of the strings from a predetermined list. As context for the example code, RFC1918 sets out several IPv4 ranges reserved for private networks.

Ubuntu: Extending capacity of an LVM volume group using an existing or new disk

It is common for a virtualized Guest OS base image to have a generic storage capacity. This capacity can easily be exceeded by production scenarios, performance testing, logging, or even the general cruft of running a machine 24×7. If your VM is using Logical Volume Management (LVM), adding space can be done by following a few straightforward steps.
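A sketch using a newly attached disk (device, volume group, and logical volume names are hypothetical; confirm yours with lsblk, vgs, and lvs first, and use the resizer matching your filesystem):

# initialize the new disk and add it to the volume group
sudo pvcreate /dev/sdb
sudo vgextend ubuntu-vg /dev/sdb

# grow the logical volume into the free space, then the ext4 filesystem
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv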