Bash: cloning the ownership and permissions of another file using reference

If you need to create a file that has the exact same ownership and permission bits as an existing file, the ‘--reference’ flag to chown and chmod provides a convenient shortcut. For example, if you had a file named ‘myoriginal’ that had the exact ownership and permissions required for a new file ‘mynewfile’, you could use the commands below …
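
As a sketch of what those commands might look like, assuming ‘myoriginal’ and ‘mynewfile’ are in the current directory:

# create the new file
touch mynewfile
# copy owner and group from the reference file
sudo chown --reference=myoriginal mynewfile
# copy permission bits from the reference file
sudo chmod --reference=myoriginal mynewfile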

Kubernetes: microk8s cluster on Ubuntu using terraform and libvirt

microk8s is a lightweight Kubernetes deployment by Canonical that is enterprise-grade, yet also compact enough to run on development boxes and edge devices. In this article, I will show you how to deploy a three-node microk8s cluster on Ubuntu nodes that are created using Terraform and a local KVM libvirt provider. This article focuses on …
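
The article walks through the full Terraform and libvirt setup; as a minimal illustration of the microk8s side only, installation on each Ubuntu node is typically done via snap (running these on every node and joining with the printed command is an assumption about the article's flow):

# install microk8s from the snap store
sudo snap install microk8s --classic
# wait until this node reports ready
sudo microk8s status --wait-ready
# on the first node, print the join command that additional nodes use to form the cluster
sudo microk8s add-node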

KVM: installing Terraform and the libvirt provider for local KVM resources

Terraform is a popular tool for provisioning infrastructure on cloud providers such as EC2 and Azure, but there is also a provider written for local KVM libvirt resources. Using the libvirt provider, we can use standard Terraform constructs to create local VMs, networks, and disks. And unlike older versions of this provider, the plugin binary …
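
A minimal sketch of pulling in the dmacvicar libvirt provider with modern Terraform (the connection URI assumes the default local system libvirt daemon):

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

# connect to the local KVM/libvirt daemon
provider "libvirt" {
  uri = "qemu:///system"
}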

Ansible: Ubuntu alternatives using the community.general collection

In a previous article, I showed how to manually set up Alternatives so that different versions of a binary could co-exist on a target machine. In that step-by-step example, we used the Terraform binary as an example, and placed two independent versions in /usr/local/bin, and then set the priority so that terraform14 was preferred. To do …
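
A minimal sketch of the equivalent task using the community.general.alternatives module; the paths and priority are carried over from the manual example and are assumptions:

- name: register terraform14 as an alternative for terraform
  community.general.alternatives:
    name: terraform
    link: /usr/local/bin/terraform
    path: /usr/local/bin/terraform14
    priority: 20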

Git: cloning a git repository from one location to another

Most Git providers-as-a-service have administrative functions for renaming, moving, and even importing repositories from other provider URLs. However, it is also valid to perform these operations manually by repointing the origin and then pushing all commits and tags to a new repository URL. # make sure all changes are pushed first git push # check …
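
A hedged sketch of the manual repoint-and-push approach (the new repository URL is a placeholder):

# point origin at the new repository location
git remote set-url origin https://git.example.com/myteam/myrepo.git
# push all branches and tags to the new location
git push --all origin
git push --tags origin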

Ansible: implementing a looping block using include_tasks

Ansible blocks provide a convenient way to logically group tasks. So it is unfortunate that native Ansible syntax does not allow looping to be combined with a block. Consider the simple conditional block below, controlled by a variable ‘do_block_logic’: - name: simple block with conditional block: - name: simple block task1 debug: msg="hello" - name: …
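
The workaround named in the title is to move the block's tasks into a separate file and loop over include_tasks instead; a minimal sketch, where the file name, loop items, and loop variable are illustrative:

# main playbook: loop over the included task file
- name: simulate a looping block
  include_tasks: block_tasks.yml
  loop: "{{ ['one', 'two', 'three'] }}"
  loop_control:
    loop_var: outer_item

# block_tasks.yml: the tasks that would have lived inside the block
- name: simple block task1
  debug:
    msg: "hello {{ outer_item }}"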

Bash: using printf to display fixed-width padded string

One way to implement character padding in Bash is to use printf and substring extraction. This can be especially useful in reports or menu display. Given a $padding variable that contains the maximum length of characters, you can subtract out the length of a display string like below. # length of maximum padding padding="........................................" printf …
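
A minimal sketch of the substring-extraction technique; the padding string and display text are just examples:

# padding string at least as wide as the widest column
padding="........................................"
string="menu item 1"
# print the string, then only the portion of the padding that extends past its length
printf "%s%s\n" "$string" "${padding:${#string}}"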

Ansible: unzipping an encrypted file using the unarchive module

If you need to expand an encrypted zip file using the Ansible unarchive module, then you will need to provide the password using the ‘extra_opts’ parameter. Per below, make sure you pass the "-P" flag and the password as separate arguments. - name: unzip encrypted zip unarchive: src: mysource.zip dest: /remote/path extra_opts: - "-P" …
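
A hedged sketch of the full task, with the password supplied as its own list element right after "-P"; the paths and the password variable name are assumptions:

- name: unzip encrypted zip
  unarchive:
    src: mysource.zip
    dest: /remote/path
    extra_opts:
      - "-P"
      - "{{ zip_password }}"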

Docker: building an ntp server image with Alpine and chrony

If you need a lightweight NTP server, an Alpine-based container image with a chrony daemon takes up minimal runtime resources and is about 8 MB in size. I have pushed ‘fabianlee/docker-chrony-alpine‘ to Docker Hub. The run command requires that you specify Linux capabilities and a volume for the chrony.conf file, so the easiest way to …
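
A hedged example of what such a run command might look like; the SYS_TIME capability and mount path are assumptions based on typical chrony containers, so check the image's README for the exact flags:

# SYS_TIME lets chronyd adjust the system clock; chrony.conf is mounted read-only
docker run -d --name chrony \
  --cap-add SYS_TIME \
  -p 123:123/udp \
  -v $(pwd)/chrony.conf:/etc/chrony/chrony.conf:ro \
  fabianlee/docker-chrony-alpine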

Ansible: installing the latest Ansible on Ubuntu

Update Sep 2023: Installing ansible-core at user level (not system) with pip. Ansible is an agentless configuration management tool that helps operations teams manage installation, patching, and command execution across a set of servers. In this article I’ll describe how to install the latest release of Ansible.
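
A minimal sketch of the user-level pip approach mentioned in the update note (no sudo required; the binaries land under ~/.local):

# install ansible-core for the current user only
python3 -m pip install --user ansible-core
# verify the binary installed under ~/.local/bin
~/.local/bin/ansible --version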

Terraform: provisioning GCP servers in both public and private subnets

It is relatively straightforward to create a GCP public subnet where the compute instances have access to the public internet via the default internet gateway. But once you start building private subnets behind it, you must start considering firewall, routing, and the NAT gateways required to reach public services. In this article, I will use …
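
As one hedged illustration of the NAT piece only, a Cloud NAT attached to a Cloud Router is what typically gives private-subnet instances outbound internet access; the names, region, and the referenced VPC resource are placeholders:

resource "google_compute_router" "router" {
  name    = "nat-router"
  region  = "us-east1"
  network = google_compute_network.vpc.id
}

resource "google_compute_router_nat" "nat" {
  name                               = "nat-gateway"
  router                             = google_compute_router.router.name
  region                             = "us-east1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}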

Terraform: provisioning AWS servers in both public and private subnets

It is relatively straightforward to create an AWS public subnet where the compute instances have access to the public internet via the default internet gateway. But once you start building private subnets behind it, you must start considering security groups, routing, and the NAT gateways required to reach public services. In this article, I will …
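
Similarly hedged, the private-subnet piece on AWS usually means a NAT gateway in the public subnet plus a default route in the private route table; the referenced EIP, subnet, and route table resources are placeholders:

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.nat.id
}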

Terraform: provisioning an RDP enabled Windows server in Azure

The ‘azurerm‘ Terraform provider allows you to build a Windows server in Microsoft’s Azure hyperscaler. However, in order to use this provider, you must first install the Azure CLI. And in line with automation best practices, we will use a Service Account (Principal) to create the networks, security rules, and compute instances. When complete, you’ll …
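
As a hedged example of the Service Principal step only (the principal name and subscription id are placeholders):

# log in interactively first
az login
# create a Service Principal scoped to the subscription for Terraform to use
az ad sp create-for-rbac --name terraform-sp \
  --role Contributor \
  --scopes /subscriptions/00000000-0000-0000-0000-000000000000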

Terraform: installing Terraform manually on Ubuntu

Terraform is a popular tool for provisioning infrastructure on cloud providers such as EC2, Azure, and GCP. If you want to install Terraform on Ubuntu using apt-get, follow HashiCorp’s standard installation document. However, I find that I often need multiple versions for different projects. Find your desired version of the binaries at the Terraform download …
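
A minimal sketch of the manual install; the version number is just an example:

# download and unpack a specific version from releases.hashicorp.com
ver=1.5.7
wget https://releases.hashicorp.com/terraform/${ver}/terraform_${ver}_linux_amd64.zip
unzip terraform_${ver}_linux_amd64.zip
# install under a version-suffixed name so multiple versions can coexist
sudo mv terraform /usr/local/bin/terraform-${ver}
terraform-${ver} version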

Ansible: orchestrating ssh access through a bastion host

Ansible uses ssh to configure its target host inventory, but for on-premise datacenters as well as hyperscalers like EC2/GCP/Azure, the target hosts are often purposely located in deeper private subnets that cannot be reached from the Ansible orchestrator host. One solution is to enable a bastion/jumpbox host that serves as the forwarding host. It sits …
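
A hedged sketch of the group variable that makes ssh hop through the bastion; the group name, user, and bastion hostname are placeholders:

# group_vars/private_hosts.yml
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q ubuntu@bastion.example.com"'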

Azure: installing the Azure CLI on Ubuntu

Managing resources in Azure from the command line can be done natively from Ubuntu using the Azure CLI. First, add the prerequisite packages. sudo apt-get update sudo apt-get install ca-certificates curl apt-transport-https lsb-release gnupg -y Then install the Microsoft signing key and add the custom repository. curl -sL https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor | sudo tee …
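
The excerpt is cut off mid-command; a hedged sketch of the remaining steps, following Microsoft's documented repository setup (the keyring path reflects their current docs and may differ in older instructions):

# finish installing the signing key
sudo mkdir -p /etc/apt/keyrings
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
  gpg --dearmor | sudo tee /etc/apt/keyrings/microsoft.gpg > /dev/null
# add the azure-cli repository for this Ubuntu release
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/microsoft.gpg] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
  sudo tee /etc/apt/sources.list.d/azure-cli.list
# install the CLI
sudo apt-get update && sudo apt-get install azure-cli -y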

Terraform: invoking a startup script for an EC2 aws_instance

You can bake a startup script directly into the creation of your EC2 instance when using Terraform. Although complex post-configuration should be left to tools such as Ansible, essential bootstrap type commands or custom routes for instances in private subnets are reasons why you might need to use this hook. Below is an example of …
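
A minimal sketch of hooking a startup script into an aws_instance; the AMI id, instance type, and script path are placeholders:

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  # run this script via cloud-init on first boot
  user_data = file("${path.module}/bootstrap.sh")
}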

Terraform: using update-alternatives to manage multiple terraform binaries

If you have multiple terraform projects, it can be necessary to support multiple versions of the terraform binary to match module and provider dependencies. Instead of creating a custom solution of binary copies and links, this can be done using the Alternatives concept which handles these symbolic links in a standard way using links in …
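
A hedged sketch of registering two terraform binaries with update-alternatives; the version-suffixed paths and priorities mirror the manual example and are assumptions:

# register both versions; the higher priority wins in auto mode
sudo update-alternatives --install /usr/local/bin/terraform terraform /usr/local/bin/terraform13 10
sudo update-alternatives --install /usr/local/bin/terraform terraform /usr/local/bin/terraform14 20
# or pick one interactively
sudo update-alternatives --config terraform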

Ansible: installing linux-headers matching kernel for Ubuntu

For Ubuntu, there are a couple of ways you can install the linux-headers package matching the kernel version. You can either explicitly specify the version, or use the meta package as shown below. # specify kernel version using subshell sudo apt-get install -y linux-headers-$(uname -r) # OR meta package that auto-matches kernel sudo apt-get install …
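
The excerpt is cut off at the meta-package command; a sketch of both options, where the meta package name assumed here is linux-headers-generic:

# option 1: headers for the exact running kernel
sudo apt-get install -y linux-headers-$(uname -r)
# option 2: meta package that tracks the current generic kernel
sudo apt-get install -y linux-headers-generic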

Kubernetes: Using Downward API metadata from a GoLang application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I’ll show how you can read the environment variables and mounted files from inside a containerized GoLang based application.

Kubernetes: Using Downward API metadata from a Python application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I’ll show how you can read the environment variables and mounted files from inside a containerized Python based application.

Kubernetes: using the Downward API to access pod/container metadata

The Kubernetes Downward API allows a pod to get access to metadata about itself and the cluster without creating a tight coupling to the Kubernetes API. For example, information such as pod name, labels, annotations, IP address, node, and cpu/memory limits can be made available inside the pod. In this article, I’ll show how to …
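
A minimal sketch of exposing pod metadata through the Downward API as environment variables; the pod, image, and variable names are arbitrary examples:

apiVersion: v1
kind: Pod
metadata:
  name: downward-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    # inject the pod's own name and IP into the container environment
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP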

GCP: pulling an image from the Container Registry of another project

In a previous article I discussed the advantages of keeping container images in the private Google Container Registry of a project. And if you have a GKE cluster in the exact same project, then image pulls happen seamlessly without any additional configuration required. However, if the GKE cluster is in a different project than the …
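
As one hedged illustration of the usual fix, the GKE nodes' service account is granted read access to the registry project's storage; the project ids and service account below are placeholders:

# allow the GKE node service account to read images from the registry project
gcloud projects add-iam-policy-binding registry-project-id \
  --member "serviceAccount:my-gke-nodes@gke-project-id.iam.gserviceaccount.com" \
  --role "roles/storage.objectViewer"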

GCP: pushing GKE images into gcr.io to avoid pull rate limits

Docker Hub now enforces pull rate limits (since November 2020). And unfortunately, this limit is often reached at critical moments such as upgrades or infrastructure events when bulk pod recreation is happening. One way to avoid this problem is to place your images into an alternate image registry. This could mean a lot of work …
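
A minimal sketch of re-homing a single public image into a project's gcr.io registry; the image tag and project id are placeholders, and it assumes 'gcloud auth configure-docker' has already been run:

# pull from Docker Hub, retag into gcr.io, and push
docker pull nginx:1.21
docker tag nginx:1.21 gcr.io/my-project-id/nginx:1.21
docker push gcr.io/my-project-id/nginx:1.21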

Kubernetes: detecting the installed version of nginx ingress

If you need to determine the version of the nginx ingress controller deployed, then you can invoke the ingress controller binary with the ‘--version’ flag. But this binary is located in the ingress-nginx-controller pod, so do a ‘kubectl exec’ like below. # show all running nginx ingress pods kubectl get pods -n $ingress_ns -l app.kubernetes.io/name=ingress-nginx …
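
A hedged sketch of the exec, with the $ingress_ns variable carried over from the excerpt and the binary path assumed to be the controller's standard /nginx-ingress-controller:

# grab the first controller pod name
pod=$(kubectl get pods -n $ingress_ns -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
# ask the controller binary for its version
kubectl exec -n $ingress_ns $pod -- /nginx-ingress-controller --version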

Kubernetes: testing pod communication directly from istio sidecar proxy

Once you introduce an istio sidecar proxy into your deployment, it becomes another point at which you might need to troubleshoot network connectivity to the primary container. Assuming you have deployed a pod with an app label “helloworld” in the default namespace listening on port 5000, you can use a command like the following to …
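
A hedged example of exercising the app port from inside the sidecar; the deployment name and port come from the scenario above, and curl being available in the istio-proxy image is an assumption:

# run curl from the istio-proxy container against the app container on localhost
kubectl exec deploy/helloworld -c istio-proxy -n default -- \
  curl -s http://localhost:5000/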