Containers

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.9 to 1.10

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.9 to 1.10 on VMware. The instructions provided here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.9 installation article.

Kubernetes: Anthos GKE on-prem 1.10 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best-practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps, as officially documented here. However, if you want to practice in a nested VMware environment …

Docker: Installing Docker CE on Ubuntu focal 20.04

Docker is a container platform that streamlines software delivery and provides isolation, scalability, and efficiency with less overhead than traditional hypervisor-based virtualization. These instructions are taken directly from the official Docker for Ubuntu page, but I wanted to reiterate those tasks essential for installing the Docker Community Edition on Ubuntu focal 20.04. If you want …
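
For reference, the core of those steps looks roughly like the following, condensed from the official Docker docs for Ubuntu focal (verify repository URL and package names against the current official page):

# prerequisites and Docker's GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# add the stable repository for this Ubuntu release (focal)
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install Docker CE and run a quick smoke test
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo docker run --rm hello-world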

Kubernetes: minor version upgrade of Anthos GKE on-prem 1.9

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to perform a minor-version upgrade from 1.9.1 to 1.9.2 on VMware. I will be using the same environment and config files described in my Anthos 1.9 installation article.

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.8 to 1.9

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.8 to 1.9 on VMware. The instructions provided here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.8 installation article.

Kubernetes: Anthos GKE on-prem 1.9 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best-practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps, as officially documented here. However, if you want to practice in a nested VMware environment …

Kubernetes: Anthos GKE on-prem 1.8 on nested VMware environment

Update Dec 2021: I have written an updated version of this article for vCenter 7.0U1 and Anthos 1.9. Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best-practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive …

Kubernetes: LetsEncrypt certificates using HTTP and DNS solvers on DigitalOcean

Managing certificates is one of the most mundane, yet critical chores in the maintenance of environments. However, this manual maintenance can be off-loaded to cert-manager on Kubernetes. In this article, we will use cert-manager to generate TLS certs for a public NGINX ingress using Let’s Encrypt. The primary ingress will have two different hosts using …
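
As a flavor of what the article builds toward, here is a minimal sketch of a cert-manager ClusterIssuer using the HTTP01 solver against the NGINX ingress class (the issuer name, email, and secret name below are placeholders of mine, not from the article):

kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF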

Terraform: creating a Kubernetes cluster on DigitalOcean with public NGINX ingress

Updated Aug 2023: tested with Kubernetes 1.25 and ingress-nginx 1.8.1. Creating a Kubernetes cluster on DigitalOcean can be done manually using its web Control Panel, but for automation purposes it is better to use Terraform. In this article, we will use Terraform to create a Kubernetes cluster on DigitalOcean infrastructure. We will then use helm …
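
A rough sketch of the Terraform involved (cluster name, region, version slug, and node size are illustrative assumptions; valid version slugs come from 'doctl kubernetes options versions'):

# write a minimal cluster definition, then create it
cat > main.tf <<'EOF'
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

variable "do_token" {}

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "my-k8s-cluster"
  region  = "nyc1"
  version = "1.25.12-do.0"

  node_pool {
    name       = "default-pool"
    size       = "s-2vcpu-4gb"
    node_count = 3
  }
}
EOF

terraform init && terraform apply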

Istio: Canary upgrade of Operator from Istio 1.8 directly to 1.10

Istio announced it will support upgrades jumping directly from 1.8 to 1.10, instead of forcing an intermediate upgrade through 1.9. In this article, I will show you how to do a canary upgrade from a 1.8 operator to a 1.10 operator without affecting end-user traffic. We will incorporate the new 1.10 concept of revision tags …

Istio: Upgrading from Istio 1.7 operator without revision to fully revisioned control plane

Istio 1.7 has the ability to do canary upgrades for revisioned control planes and operators, but if you did your initial installation without the ‘revision’ flag, then you’ll need to apply these settings. In this article, I will show you how to go from a non-revisioned 1.7.5 Istio operator and control plane to a 1.7.5 …

Istio: Upgrading from Istio 1.6 operator without revision to 1.7 fully revisioned control plane

Istio has the ability to do canary upgrades for revisioned control planes, but it was only in 1.7 that the Operator itself got support for the ‘revision’ flag. In this article, I will show you how to go from a non-revisioned 1.6.6 Istio operator and control plane to a 1.7 revisioned operator and control plane.

Kubernetes: pulling out the ready status of individual containers using kubectl

kubectl will give you a synthesized column showing how many container instances in a pod are READY with the default ‘get pods’ command. But if you are dealing with json output and need this information, then you can extract it using jsonpath or jq. Here is an example output from ‘get pods’ showing the READY column …
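
To illustrate the idea (the pod name 'mypod' below is just a placeholder), the per-container ready flags can be pulled like this:

# READY column as synthesized by kubectl
kubectl get pods

# per-container name=ready pairs using jsonpath
kubectl get pod mypod -o jsonpath='{range .status.containerStatuses[*]}{.name}={.ready}{"\n"}{end}'

# the same information using jq against the json output
kubectl get pod mypod -o json | jq -r '.status.containerStatuses[] | "\(.name)=\(.ready)"'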

Kubernetes: adding and removing pod template annotations using kubectl

Although ‘kubectl annotate’ will set an annotation on an object directly, it will not set the annotation on the more deeply nested pod template for a Deployment or DaemonSet. If you want to quickly set the annotation on a pod template (.spec.template.metadata.annotations) without modifying the full manifest, you can use the ‘patch’ command. As a …
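
A quick sketch of that patch approach (the deployment name and annotation key are hypothetical):

# add or update an annotation on the pod template (this triggers a rolling update)
kubectl patch deployment myapp -p '{"spec":{"template":{"metadata":{"annotations":{"fabianlee.org/scrape":"true"}}}}}'

# remove the same annotation by setting its value to null
kubectl patch deployment myapp -p '{"spec":{"template":{"metadata":{"annotations":{"fabianlee.org/scrape":null}}}}}'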

Kubernetes: K3s with multiple Istio ingress gateways

By default, K3s uses the Traefik ingress controller and Klipper service load balancer to expose services. But this can be replaced with a MetalLB load balancer and Istio ingress controller. K3s is perfectly capable of handling Istio operators, gateways, and virtual services if you want the advanced policy, security, and observability offered by Istio. In …

Kubernetes: K3s with multiple metalLB endpoints and nginx ingress controllers

Updated March 2023: using K3s 1.26 and MetalLB 0.13.9. By default, K3s uses the Traefik ingress controller and Klipper service load balancer to expose services. But this can be replaced with a MetalLB load balancer and NGINX ingress controller. However, a single NGINX ingress controller is sometimes not sufficient. For example, the primary ingress may …
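
With MetalLB 0.13.x the address pools are defined with CRDs rather than a ConfigMap; a minimal sketch of one pool (the names and address range are assumptions for your network):

kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: primary-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.245
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: primary-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - primary-pool
EOF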

Kubernetes: Anthos GKE on-prem 1.4 on nested VMware environment

Update Dec 2021: I have written an updated version of this article for vCenter 7.0U1 and Anthos 1.8. Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best-practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive …

Kubernetes: microk8s with multiple Istio ingress gateways

microk8s has convenient out-of-the-box support for MetalLB and an NGINX ingress controller. But microk8s is also perfectly capable of handling Istio operators, gateways, and virtual services if you want the advanced policy, security, and observability offered by Istio. In this article, we will install the Istio Operator, and allow it to create the Istio Ingress …

Kubernetes: microk8s with multiple metalLB endpoints and nginx ingress controllers

Out-of-the-box, microk8s has add-ons that make it easy to enable MetalLB as a network load balancer as well as an NGINX ingress controller. But a single ingress controller is often not sufficient. For example, the primary ingress may be serving up all public traffic to your customers. But a secondary ingress might be necessary to …
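
For context, enabling those two add-ons looks roughly like this (the MetalLB address range is an assumption for your network):

# MetalLB with a dedicated address range, plus the NGINX ingress add-on
microk8s enable metallb:192.168.1.240-192.168.1.250
microk8s enable ingress

# confirm the add-ons are up
microk8s status --wait-ready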

Kubernetes: microk8s cluster on Ubuntu using terraform and libvirt

microk8s is a lightweight Kubernetes deployment by Canonical that is enterprise-grade, yet also compact enough to run on development boxes and edge devices. In this article, I will show you how to deploy a three-node microk8s cluster on Ubuntu nodes that are created using Terraform and a local KVM libvirt provider. This article focuses on …

Docker: building an ntp server image with Alpine and chrony

If you need a lightweight NTP server, an Alpine-based container image with a chrony daemon takes up minimal runtime resources and is about 8MB in size. I have pushed ‘fabianlee/docker-chrony-alpine’ to Docker Hub. The run command requires that you specify linux capabilities and a volume for the chrony.conf file, so the easiest way to …
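
As a flavor of that run command, a sketch along these lines (the capability, published port, and config path are my assumptions; see the repo README for the exact invocation):

# run the chrony NTP server, granting it the capability to adjust the clock
# and mounting a local chrony.conf into the container
docker run -d --name ntp \
  --cap-add SYS_TIME \
  -p 123:123/udp \
  -v $(pwd)/chrony.conf:/etc/chrony/chrony.conf \
  fabianlee/docker-chrony-alpine:latest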

Kubernetes: Using Downward API metadata from a GoLang application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I’ll show how you can read the environment variables and mounted files from inside a containerized GoLang-based application.

Kubernetes: Using Downward API metadata from a Python application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I’ll show how you can read the environment variables and mounted files from inside a containerized Python-based application.

Kubernetes: using the Downward API to access pod/container metadata

The Kubernetes Downward API allows a pod to get access to metadata about itself and the cluster without creating a tight coupling to the Kubernetes API. For example, information such as pod name, labels, annotations, IP address, node, and cpu/memory limits can be made available inside the pod. In this article, I’ll show how to …
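
A minimal sketch of the idea, exposing a few of those fields as environment variables (the pod name and image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "env | grep MY_ && sleep 3600"]
    env:
    # pod name, IP, and node injected via the Downward API
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
EOF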

GCP: pulling an image from the Container Registry of another project

In a previous article I discussed the advantages of keeping container images in the private Google Container Registry of a project. And if you have a GKE cluster in the exact same project, then image pulls happen seamlessly without any additional configuration required. However, if the GKE cluster is in a different project than the …
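
The usual remedy is granting the cluster's node service account read access in the registry's project; roughly like this (project IDs and the service account name are placeholders):

# allow the GKE node service account from the cluster project to read
# images stored in the registry project (gcr.io storage)
gcloud projects add-iam-policy-binding REGISTRY_PROJECT_ID \
  --member="serviceAccount:GKE_NODE_SA@CLUSTER_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"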

GCP: pushing GKE images into gcr.io to avoid pull rate limits

Docker Hub now enforces pull rate limits (since November 2020). And unfortunately, this limit is often reached at critical moments such as upgrades or infrastructure events when bulk pod recreation is happening. One way to avoid this problem is to place your images into an alternate image registry. This could mean a lot of work …
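
The basic flow is a pull, retag, and push into the project registry, something like the following (the image and project ID are examples):

# mirror a Docker Hub image into the project's Container Registry
docker pull nginx:1.21
docker tag nginx:1.21 gcr.io/MY_PROJECT_ID/nginx:1.21
docker push gcr.io/MY_PROJECT_ID/nginx:1.21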

Kubernetes: detecting the installed version of nginx ingress

If you need to determine the version of the nginx ingress controller deployed, then you can invoke the ingress controller binary with the ‘--version’ flag. But this binary is located in the ingress-nginx-controller pod, so do a ‘kubectl exec’ like below.

# show all running nginx ingress pods
kubectl get pods -n $ingress_ns -l app.kubernetes.io/name=ingress-nginx
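
And then the exec against one of those pods, roughly like this (assuming the standard ingress-nginx image, where the binary lives at /nginx-ingress-controller):

# exec into the first controller pod and ask the binary for its version
pod=$(kubectl get pods -n $ingress_ns -l app.kubernetes.io/name=ingress-nginx -o name | head -n1)
kubectl exec -n $ingress_ns $pod -- /nginx-ingress-controller --version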

Kubernetes: testing pod communication directly from istio sidecar proxy

Once you introduce an istio sidecar proxy into your deployment, it becomes another point at which you might need to troubleshoot network connectivity to the primary container. Assuming you have deployed a pod with an app label “helloworld” in the default namespace listening on port 5000, you can use a command like the following to …
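
The command in question is along these lines (assuming the deployment is also named 'helloworld' and a non-distroless proxy image, which ships curl):

# run curl from inside the istio-proxy sidecar against the app container's port
kubectl exec deploy/helloworld -c istio-proxy -- curl -s http://localhost:5000/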

Kubernetes: istio Gateway in a different namespace than VirtualService

If your istio ingress Gateway is in a different namespace than your VirtualService, then you need to make sure you prefix the gateway reference with that namespace. For example, your istio ingress Gateway might be in the ‘default’ namespace, while your Deployment, Service, and VirtualService are in the namespace ‘helloworld’:
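
A minimal sketch of how that cross-namespace reference might look (the apiVersion and kind come from the original excerpt; the service name, port, and gateway name are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
  namespace: helloworld
spec:
  hosts:
  - "*"
  gateways:
  # <namespace>/<gateway>, because the Gateway lives in 'default'
  - default/helloworld-gateway
  http:
  - route:
    - destination:
        host: helloworld
        port:
          number: 5000
EOF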

Kubernetes: Updating an existing ConfigMap using kubectl replace

Creating a ConfigMap using ‘kubectl create configmap’ is a straightforward operation. However, there is no corresponding ‘kubectl apply’ that can easily update that ConfigMap. As an example, here are the commands for the creation of a simple ConfigMap using a file named “ConfigMap-test1.yaml”.

$ cat ConfigMap-test1.yaml
test1:
  foo: bar
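
Continuing from that truncated excerpt, a sketch of the initial creation and a later in-place update (the --dry-run/replace pipeline is my completion of the technique the title refers to):

# create and then show the ConfigMap
kubectl create configmap test1 --from-file=ConfigMap-test1.yaml
kubectl get configmap test1 -o yaml

# later, update it in place by regenerating the manifest and replacing it
kubectl create configmap test1 --from-file=ConfigMap-test1.yaml --dry-run=client -o yaml | kubectl replace -f -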

CloudFoundry: Determining buildpack used by application

The “cf app” command will provide a brief expansion of a buildpack’s settings, but does not provide an exact buildpack name. Luckily, this can easily be pulled using “cf curl” and the CloudFoundry API. Assuming you have the jq utility for parsing/querying json output:

# set name of cloudfoundry app
app="my-cf-app"
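
Continuing that truncated snippet, a sketch of how the lookup might go (the v2 endpoint and jq fields are my assumptions about the API response shape):

# using name, pull the app guid, then ask the API for buildpack details
guid=$(cf app "$app" --guid)
cf curl "/v2/apps/$guid" | jq -r '.entity.buildpack, .entity.detected_buildpack'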