
Helm: Installing Helm on Ubuntu

Update Aug 2023: using newer ‘signed-by’ attribute for apt signing keys.

Installing Helm using apt is a straightforward procedure and is documented on the official site. Coming straight from the official Helm documentation, here are the first commands for Ubuntu 22:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo chmod 644 /usr/share/keyrings/helm.gpg
…
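
The remainder of the official apt sequence looks like the following (copied from the Helm documentation at the time of writing; verify the repo URL against helm.sh before use):

sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm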

Kubernetes: running a mail container for testing email during development

If you are in the development lifecycle and need to quickly test email functionality, you can deploy the codecentric/mailhog image directly within Kubernetes. It will receive all email regardless of address, and its web interface will show you all the email that has been received. In this article, I will show you how to deploy…
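
As a minimal sketch of the idea (using the upstream mailhog/mailhog image name and hypothetical service names; the article itself builds on the codecentric/mailhog packaging, and its manifests may differ):

# run MailHog, then expose SMTP (1025) and the web UI (8025)
kubectl create deployment mailhog --image=mailhog/mailhog
kubectl expose deployment mailhog --name=mailhog-smtp --port=1025
kubectl expose deployment mailhog --name=mailhog-web --port=8025
# view captured mail at http://localhost:8025
kubectl port-forward svc/mailhog-web 8025:8025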

Kubernetes: NFS mount using dynamic volume and Storage Class

If you have an external NFS export and want to share it with a pod/deployment, you can leverage the nfs-subdir-external-provisioner to create a StorageClass that can dynamically create the persistent volume. In contrast to manually creating the persistent volume and persistent volume claim, this dynamic method cedes the lifecycle of the persistent volume over to…
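
The provisioner is typically deployed with Helm; a minimal sketch, where the NFS server address and export path are placeholders for your own environment:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.10 \
  --set nfs.path=/exports/k8s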

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.9 to 1.10

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.9 to 1.10 on VMware. The instructions provided here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.9 installation article.
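
At a high level, the upgrade is driven from the admin workstation with gkeadm and gkectl; a heavily hedged sketch of the core commands (config file names are placeholders, and the full sequence in the article includes prerequisite steps such as downloading the new bundle):

gkeadm upgrade admin-workstation --config admin-ws-config.yaml
gkectl upgrade cluster --kubeconfig kubeconfig --config user-cluster.yaml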

Kubernetes: Anthos GKE on-prem 1.10 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps as officially documented here. However, if you want to practice in a…

Ubuntu: Running a bash script periodically with a user-level Systemd timer

If you have a Bash script that needs to run periodically, you can run it using a crontab entry. But you can also have it invoked by Systemd using systemd.timer. Furthermore, you can run Systemd services as user-level services instead of the typical system-level service for even further isolation. Running via Systemd provides more powerful…
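
A minimal sketch, assuming a script at ~/bin/mytask.sh and hypothetical unit names:

mkdir -p ~/.config/systemd/user

cat > ~/.config/systemd/user/mytask.service <<'EOF'
[Unit]
Description=Run mytask script

[Service]
Type=oneshot
ExecStart=%h/bin/mytask.sh
EOF

cat > ~/.config/systemd/user/mytask.timer <<'EOF'
[Unit]
Description=Run mytask every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl --user daemon-reload
systemctl --user enable --now mytask.timer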

Ubuntu: Running a bash script periodically with a system-level Systemd timer

If you have a Bash script that needs to run periodically, you can run it using a crontab entry or file. But you can also have it invoked from Systemd using systemd.timer. Running via Systemd provides more powerful constructs for invocation, configuration, monitoring, and logging. In this article, I will show how to periodically run…
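
The system-level variant uses unit files analogous to the user-level sketch above (with ExecStart pointing at an absolute path), installed under /etc/systemd/system and managed without the --user flag; unit names are again hypothetical:

sudo cp mytask.service mytask.timer /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now mytask.timer
systemctl list-timers | grep mytask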

Python: constructing a DataFrame from a relational database with pandas

If the dataset you want to analyze with pandas is coming from a normalized relational database, then you can use ‘pandas.read_sql’ to pull the data directly. In this article, we will deploy a small MariaDB instance with Docker and show how we can create a DataFrame directly from a single table or from a join between…
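
A rough sketch of the flow (credentials, database, and query are placeholders; the sqlalchemy and pymysql packages are assumed for the connection string used here):

# throwaway MariaDB instance for testing; give it a few seconds to initialize
docker run -d --name test-mariadb -e MARIADB_ROOT_PASSWORD=secret -p 3306:3306 mariadb

python3 -c "
import pandas, sqlalchemy
engine = sqlalchemy.create_engine('mysql+pymysql://root:secret@127.0.0.1:3306/mysql')
print(pandas.read_sql('SELECT user, host FROM user', engine))
"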

Docker: Installing Docker CE on Ubuntu focal 20.04

Docker is a container platform that streamlines software delivery and provides isolation, scalability, and efficiency with less overhead than OS-level virtualization. These instructions are taken directly from the official Docker for Ubuntu page, but I wanted to reiterate those tasks essential for installing the Docker Community Edition on Ubuntu focal 20.04. If you want…
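
Condensed from the official Docker documentation (verify the key location and repo line against the current docs before use):

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io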

Kubernetes: minor version upgrade of Anthos GKE on-prem 1.9

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to perform a minor-version upgrade from 1.9.1 to 1.9.2 on VMware. I will be using the same environment and config files described in my Anthos 1.9 installation article.

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.8 to 1.9

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.8 to 1.9 on VMware. The instructions provided here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.8 installation article.

Kubernetes: Anthos GKE on-prem 1.9 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps as officially documented here. However, if you want to practice in a…

Kubernetes: Anthos GKE on-prem 1.8 on nested VMware environment

Update Dec 2021: I have written an updated version of this article for vCenter 7.0U1 and Anthos 1.9.

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a…

Python: printing in color using ANSI color codes

Although there are Python modules [1,2,3] specially suited for displaying text to the console in color, if you want a quick no-dependency method then you can use ANSI color codes directly. Here is an example of printing a line in green, then red.

print("\033[0;32mOK this is green\033[00m")
print("\033[0;31mERROR this is red\033[00m")

Additional color codes can…

Python: find the most recently modified file matching a pattern

Whether it is the most recent log file, image, or report, sometimes you will need to find the most recently modified file in a directory. The example below finds the latest file in the “/tmp” directory.

import os
import glob

# get list of files that matches pattern
pattern = "/tmp/*"
files = list(filter(os.path.isfile, glob.glob(pattern)))

# pick the newest by modification time (completion of the truncated excerpt
# using the standard os.path.getmtime idiom)
latest = max(files, key=os.path.getmtime)
…

Bash: deleting a file with special characters using its inode value

If you have a file with special characters (single quotes, wildcards, etc.) in the name, it can be difficult to discover the exact escape sequence to correctly delete it. To avoid playing with escape characters, you can simply use the inode number of the file instead. For example, let’s say you accidentally specify tar options incorrectly…
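
A quick sketch of the technique (the inode number 123456 is a placeholder for the value shown by ls):

# show inode numbers in the first column
ls -li
# delete the file that has that inode
find . -maxdepth 1 -inum 123456 -delete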

Python: converting JSON to dot notation for easier path determination

Most of the modern cloud platforms and utilities have us manipulate either JSON or YAML configuration files. And when you start dealing with real-world scenarios with hundreds of lines of embedded data structures, it is too difficult and error-prone to manually inspect indentation levels to determine the exact dotted or JSON path to an…
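
The article’s solution is Python-based; purely as an illustration of the same idea, the third-party gron utility also flattens JSON into dotted assignments:

echo '{"spec": {"replicas": 3}}' > sample.json
# output includes lines such as: json.spec.replicas = 3;
gron sample.json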

Kubernetes: LetsEncrypt certificates using HTTP and DNS solvers on DigitalOcean

Managing certificates is one of the most mundane, yet critical, chores in the maintenance of environments. However, this manual maintenance can be off-loaded to cert-manager on Kubernetes. In this article, we will use cert-manager to generate TLS certs for a public NGINX ingress using Let’s Encrypt. The primary ingress will have two different hosts using…
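
A minimal sketch of an HTTP-01 ClusterIssuer against the Let’s Encrypt staging endpoint (issuer name and email are placeholders; the article also covers a DNS-01 solver for DigitalOcean):

kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
EOF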

Terraform: creating a Kubernetes cluster on DigitalOcean with public NGINX ingress

Updated Aug 2023: tested with Kubernetes 1.25 and ingress-nginx 1.8.1.

Creating a Kubernetes cluster on DigitalOcean can be done manually using its web Control Panel, but for automation purposes it is better to use Terraform. In this article, we will use Terraform to create a Kubernetes cluster on DigitalOcean infrastructure. We will then use helm…
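
A hedged sketch of the core resource (the name, region, node size, and version slug are placeholders, and the digitalocean provider must be configured with an API token); terraform init and apply then create the cluster:

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "mycluster"
  region  = "nyc1"
  version = "1.25.8-do.0"

  node_pool {
    name       = "default"
    size       = "s-2vcpu-4gb"
    node_count = 2
  }
}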

Terraform: post-configuration by calling remote-exec script with parameters

If you are creating a VM resource and must run a Bash script as part of the initialization, that can be done within Terraform using the remote-exec provisioner and its ability to execute scripts via ssh. If you need to send arguments to this script, there is a standard pattern described in the official documentation…
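
The documented pattern is to upload the script with the file provisioner and then invoke it with arguments via remote-exec; a sketch, assuming these blocks sit inside the VM resource alongside a valid connection block (script path and arguments are placeholders):

provisioner "file" {
  source      = "scripts/setup.sh"
  destination = "/tmp/setup.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/setup.sh",
    "/tmp/setup.sh arg1 arg2",
  ]
}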

Terraform: using dynamic blocks to add multiple disks on a vsphere_virtual_machine

If the Terraform resource you are creating supports multiple dependent entities (e.g. a single VM with multiple disks or networks), but only by adding hardcoded duplicate text blocks, then you should consider Terraform dynamic blocks. For example, if you are creating a vsphere_virtual_machine with two additional data disks, then here is a snippet showing how…
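
A sketch of such a dynamic block, generating one disk block per element (var.data_disks and its attribute names are hypothetical; this sits inside the vsphere_virtual_machine resource):

dynamic "disk" {
  for_each = var.data_disks
  content {
    label       = disk.value.label
    size        = disk.value.size_gb
    unit_number = disk.value.unit_number
  }
}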

Terraform: using json files as input variables and local variables

Specifying input variables in the “terraform.tfvars” file in HCL syntax is commonly understood. But if the values you need are already coming from a json source, it might make more sense to feed those directly to Terraform. Here is an example where the simple variable “a” is provided via an external json file.
…
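
A minimal sketch using Terraform’s native auto-loading of “terraform.tfvars.json” (the variable name “a” comes from the excerpt; the value and output are just for demonstration):

# main.tf
variable "a" {
  type = string
}

output "show_a" {
  value = var.a
}

# terraform.tfvars.json
{ "a": "hello" }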

Terraform: converting ordered lists to sets to avoid errors with for_each

If you are using a Terraform “for_each” and get the error message below, it is most likely because you are sending an ordered list instead of an unordered set (which is not supported at the resource level).

The given “for_each” argument value is unsuitable: the “for_each” argument must be a map, or set of strings,…
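
The fix is usually to wrap the list in toset(); a minimal sketch using a hypothetical list of names:

locals {
  names = ["a", "b", "c"]
}

resource "null_resource" "each" {
  for_each = toset(local.names)

  triggers = {
    name = each.value
  }
}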

KVM: running qemu-img info without exclusive access using force-share flag

By default, ‘qemu-img info’ will throw an error if it cannot get exclusive access to the disk file it is trying to read.

$ sudo qemu-img info mydisk.qcow2
qemu-img: Could not open 'mydisk.qcow2': Failed to get shared "write" lock
Is another process using the image [mydisk.qcow2]?

Although it is not listed in the man page,…
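
The flag in question looks like this (short form -U):

sudo qemu-img info --force-share mydisk.qcow2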

Istio: Canary upgrade of Operator from Istio 1.8 directly to 1.10

Istio announced it will support upgrades jumping directly from 1.8 to 1.10, instead of forcing an intermediate upgrade through 1.9. In this article, I will show you how to do a canary upgrade from a 1.8 operator to a 1.10 operator without affecting end-user traffic. We will incorporate the new 1.10 concept of revision tags…
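
The essence of a canary upgrade, sketched with istioctl (the revision and namespace names are placeholders; the article additionally layers in the 1.10 revision-tag mechanism):

# install the new control plane alongside the old one
istioctl install --set revision=1-10-0 -y
# point a namespace at the new revision and restart its workloads
kubectl label namespace mynamespace istio-injection- istio.io/rev=1-10-0 --overwrite
kubectl rollout restart deployment -n mynamespace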

Istio: Upgrading from Istio 1.7 operator without revision to fully revisioned control plane

Istio 1.7 has the ability to do canary upgrades for revisioned control planes and operators, but if you did your initial installation without the ‘revision’ flag, then you’ll need to apply these settings. In this article, I will show you how to go from a non-revisioned 1.7.5 Istio operator and control plane to a 1.7.5…

Istio: Upgrading from Istio 1.6 operator without revision to 1.7 fully revisioned control plane

Istio has the ability to do canary upgrades for revisioned control planes, but it was only in 1.7 that the Operator itself got support for the ‘revision’ flag. In this article, I will show you how to go from a non-revisioned 1.6.6 Istio operator and control plane to a 1.7 revisioned operator and control plane…

Kubernetes: pulling out the ready status of individual containers using kubectl

kubectl will give you a synthesized column showing how many container instances in a pod are READY with the default ‘get pods’ command. But if you are dealing with json output and need this information, then you can extract it using jsonpath or jq. Here is an example output from ‘get pods’ showing the READY…
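
Two hedged one-liners showing the idea (the output format here is my own choice, not necessarily the article’s):

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].ready}{"\n"}{end}'

kubectl get pods -o json | jq -r '.items[] | .metadata.name + " " + ([.status.containerStatuses[].ready | tostring] | join(","))'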

Kubernetes: adding and removing pod template annotations using kubectl

Although ‘kubectl annotate’ will set an annotation on an object directly, it will not set the annotation on the more deeply nested pod template for a Deployment or DaemonSet. If you want to quickly set the annotation on a pod template (.spec.template.metadata.annotations) without modifying the full manifest, you can use the ‘patch’ command. As a…
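
For example, setting and then removing a pod template annotation on a hypothetical deployment named myapp:

kubectl patch deployment myapp -p '{"spec":{"template":{"metadata":{"annotations":{"mykey":"myvalue"}}}}}'

kubectl patch deployment myapp --type=json -p '[{"op":"remove","path":"/spec/template/metadata/annotations/mykey"}]'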

Kubernetes: K3s with multiple Istio ingress gateways

By default, K3s uses the Traefik ingress controller and Klipper service load balancer to expose services. But these can be replaced with a MetalLB load balancer and Istio ingress controller. K3s is perfectly capable of handling Istio operators, gateways, and virtual services if you want the advanced policy, security, and observability offered by Istio. In…
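
The starting point is installing K3s with those built-in components disabled; for example:

curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable servicelb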

Kubernetes: K3s with multiple metalLB endpoints and nginx ingress controllers

Updated March 2023: using K3s 1.26 and MetalLB 0.13.9.

By default, K3s uses the Traefik ingress controller and Klipper service load balancer to expose services. But these can be replaced with a MetalLB load balancer and NGINX ingress controller. A single NGINX ingress controller is sometimes not sufficient, however. For example, the primary ingress may…
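
With MetalLB 0.13.x, the address pools that back each endpoint are defined with CRDs; a minimal sketch (the pool name and address range are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: primary
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.249
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: primary
  namespace: metallb-system
spec:
  ipAddressPools:
  - primary
EOF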