Kubernetes: volumeMount, emptyDir, and env equivalents during local Docker development

Kubernetes has a rich way of expressing volumes/volumeMounts for mounting files, emptyDir for ephemeral directories, and env/envFrom for adding environment variables to your container definition running on a Kubernetes cluster. However, if you are actively iterating on the development of an image, it may slow you down to require a deployment to a remote …
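
While iterating locally, each of these constructs has a rough docker run equivalent. A minimal sketch, assuming an illustrative image name, paths, and env values:

  # -v bind mount   ~= volumeMount (project files mounted into the container)
  # --tmpfs         ~= emptyDir (ephemeral dir, discarded when the container exits)
  # -e / --env-file ~= env / envFrom
  docker run --rm \
    -v "$(pwd)/config:/app/config:ro" \
    --tmpfs /scratch \
    -e LOG_LEVEL=debug \
    --env-file ./dev.env \
    myimage:dev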

Kubernetes: kustomize overlay to enrich a base resource

With kustomize built into the kubectl CLI since version 1.14, there is little reason not to take advantage of this overlay system to deploy components to your Kubernetes cluster. Kustomize has the advantage that it is purpose-built to understand and validate yaml and Kubernetes CRDs, as opposed to bespoke templating solutions using sed/envsubst, Ansible, …
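
A minimal sketch of the overlay pattern (directory names and labels are illustrative): an overlay kustomization.yaml points at a base and layers changes on top.

  # overlay/kustomization.yaml
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
    - ../base
  namePrefix: prod-
  commonLabels:
    env: prod

Applying it is built into kubectl: ‘kubectl apply -k overlay/’.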

GCP: Cloud Function to handle requests to HTTPS LB during maintenance

At some point you may need to schedule a maintenance window for your solution.  But that doesn’t mean the end-user traffic or client integrations will stop requesting the services from the GCP external HTTPS LB that fronts all client requests. The VM instances and GKE clusters that normally respond to requests may not be able …
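
The handler itself can be tiny. A minimal sketch of such a function, assuming illustrative names and retry window, that answers every request with a 503 so clients know the outage is temporary:

  # main.py
  import functions_framework

  @functions_framework.http
  def maintenance(request):
      # 503 plus Retry-After marks the outage as temporary and retryable
      return ("Service temporarily unavailable for maintenance", 503,
              {"Retry-After": "3600"})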

GCP: Deploying a 2nd gen Python Cloud Function and exposing from an HTTPS LB

GCP Cloud Functions have taken a step forward with the 2nd generation release.  One of the biggest architectural differences is that multiple requests can now run concurrently on a single instance, enabling large traffic loads. In this article, I will show you how to deploy a simple Python Flask web server as a 2nd gen …
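
For orientation, deploying a 2nd gen HTTP function looks roughly like this (function name, region, runtime, and entry point are illustrative):

  gcloud functions deploy hello-func \
    --gen2 \
    --runtime=python311 \
    --region=us-central1 \
    --source=. \
    --entry-point=app \
    --trigger-http \
    --allow-unauthenticated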

GCP: global external HTTPS LB for securely exposing insecure VM services

If you have unmanaged GCP VM instances running services on insecure ports (e.g. Apache HTTP on port 80), one way to secure the public external traffic is to create an external GCP HTTPS load balancer. Conceptually, we want to expose a secure front to otherwise insecure services. While the preferred method would be to secure …
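
At a high level, the LB is assembled from a health check, a backend service wrapping the instance group, a URL map, and an HTTPS frontend. A heavily condensed sketch (resource names are illustrative and several flags are omitted):

  gcloud compute health-checks create http basic-check --port=80
  gcloud compute backend-services create web-backend \
    --protocol=HTTP --health-checks=basic-check --global
  gcloud compute backend-services add-backend web-backend \
    --instance-group=web-ig --instance-group-zone=us-central1-a --global
  gcloud compute url-maps create web-map --default-service=web-backend
  gcloud compute target-https-proxies create web-proxy \
    --url-map=web-map --ssl-certificates=web-cert
  gcloud compute forwarding-rules create web-fw \
    --target-https-proxy=web-proxy --ports=443 --global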

GCP: internal HTTPS LB for securely exposing insecure VM services

If you have unmanaged GCP VM instances running services on insecure ports (e.g. Apache HTTP on port 80), one way to secure the internal communication coming from other internal pods/apps is to create an internal GCP HTTPS load balancer. Conceptually, we want to expose a secure front to otherwise insecure services. While the preferred method …
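
One prerequisite specific to the internal flavor: the region needs a proxy-only subnet for the load balancer's managed Envoy proxies. A sketch with illustrative network, region, and range:

  gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY --role=ACTIVE \
    --region=us-central1 --network=my-vpc --range=10.129.0.0/23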

GCP: serving a maintenance page using an HTTPS LB and container native routing

No matter how highly available your services, there may still be significant backend events that require planned maintenance.  During this downtime, you should still reply to end users and service integrations with a proper response. In this article, I will show you how to configure your GCP HTTPS load balancer so that a single maintenance service …
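
Container-native routing hinges on the GKE Service being backed by network endpoint groups (NEGs), which is requested with a single annotation. A minimal sketch (service name and ports are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: maintenance-page
    annotations:
      cloud.google.com/neg: '{"ingress": true}'
  spec:
    selector:
      app: maintenance-page
    ports:
      - port: 80
        targetPort: 8080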

Kubernetes: emptying the finalizers for a namespace that will not delete

If your intent is to delete all the objects in a namespace, but the command is not completing, emptying the namespace finalizer will often allow the deletion to finish. For example, if you have tried deleting “my-namespace” like below and it will not complete.

  kubectl delete ns my-namespace --force --grace-period=0

Then as written by …
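
The finalizer-emptying step itself is typically done by rewriting the namespace finalize subresource, along these lines (requires jq; the namespace name is illustrative):

  kubectl get ns my-namespace -o json \
    | jq '.spec.finalizers = []' \
    | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -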

GCP: HTTP to HTTPS redirection using HTTPS LB Ingress

It is not necessary to create an independent GCP HTTPS LB or other improvisation to redirect insecure HTTP traffic to your HTTPS load balancer.  The existing public Ingress can reference a FrontendConfig object that specifies redirection to HTTPS. Below is a FrontendConfig definition that can redirect the insecure traffic.

  apiVersion: networking.gke.io/v1beta1
  kind: FrontendConfig
  metadata:
    name: http-to-https    # illustrative name
  spec:
    redirectToHttps:
      enabled: true        # per the documented GKE FrontendConfig API
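
The public Ingress then references the FrontendConfig by annotation. A minimal sketch with illustrative names:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-ingress
    annotations:
      networking.gke.io/v1beta1.FrontendConfig: "http-to-https"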

GCP: Private GKE cluster in Autopilot mode using Terraform

GKE Autopilot reduces the operational costs of managing GKE clusters by freeing you from node-level maintenance, instead focusing just on pod workloads.  Costs are accrued based on pod resource consumption and not on node resource sizes or node count, which are managed by Google. Since you no longer own the node level, there are …
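
In Terraform terms, the Autopilot switch is a single flag on the cluster resource. A pared-down sketch (names and network references are illustrative; the private-cluster settings from the article are omitted):

  resource "google_container_cluster" "autopilot" {
    name             = "my-autopilot-cluster"
    location         = "us-central1"
    enable_autopilot = true

    network    = "my-vpc"
    subnetwork = "my-subnet"
  }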

GCP: Private GKE Cluster with Anthos Service Mesh exposing services

As opposed to public GKE clusters, which have their IP addresses exposed, private GKE clusters use private internal IP addresses.  This provides an enhanced security stance, but also means we need a solution such as Anthos Service Mesh to explicitly expose our services. In our previous article, we built a private GKE cluster using Terraform.
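
Anthos Service Mesh exposes services through the Istio APIs. A minimal sketch of a Gateway bound to the mesh ingress gateway (names, hosts, and the TLS secret are illustrative):

  apiVersion: networking.istio.io/v1beta1
  kind: Gateway
  metadata:
    name: my-gateway
  spec:
    selector:
      istio: ingressgateway   # the ASM/Istio ingress gateway pods
    servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        tls:
          mode: SIMPLE
          credentialName: my-tls-cert   # illustrative TLS secret name
        hosts:
          - "*"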

Kubernetes: ingress-nginx-controller-admission error, x509 certificate signed by unknown authority

If you delete the entire nginx namespace and reinstall again via helm chart, your nginx admission controller may throw an “x509 certificate signed by unknown authority” message when you attempt to create an nginx ingress. This will happen regardless of whether the ingress is using http only or secure https, and also whether or not the …
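
The usual culprit is a stale caBundle in the validating webhook left over from the previous install. One commonly used workaround, not necessarily the article's exact fix, is to inspect it and let the chart recreate it:

  # compare the webhook's CA bundle against the freshly installed secret
  kubectl get validatingwebhookconfiguration ingress-nginx-admission -o yaml
  # if stale, deleting it allows the helm chart's admission job to recreate it
  kubectl delete validatingwebhookconfiguration ingress-nginx-admission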

Kubernetes: using kubectl to wait for condition of pods, deployments, services

Instead of deploying a pod or service and periodically checking its status for readiness, or having your automation scripts wait for a certain number of seconds before moving to the next operation, it is much cleaner to use ‘kubectl wait’ to sense completion. Wait for pod: here is how you would wait for READY status …
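
For example (object names and timeouts are illustrative):

  # block until the pod reports the Ready condition
  kubectl wait --for=condition=Ready pod/my-pod --timeout=120s
  # block until the deployment reports the Available condition
  kubectl wait --for=condition=Available deployment/my-deployment --timeout=180s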

Helm: Installing Helm on Ubuntu

Update Aug 2023: using newer ‘signed-by’ attribute for apt signing keys. Installing Helm using apt is a straightforward procedure and documented on the official site.  Coming straight from the official helm documentation, here are the commands for Ubuntu 22.

  curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
  sudo chmod 644 /usr/share/keyrings/helm.gpg
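
The remaining steps from the official Helm documentation continue along these lines:

  sudo apt-get install apt-transport-https --yes
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" \
    | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
  sudo apt-get update
  sudo apt-get install helm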

Kubernetes: running a mail container for testing email during development

If you are in the development lifecycle and need to quickly test email functionality, you can deploy the codecentric/mailhog image directly within Kubernetes.  It will receive all email regardless of address, and from its web interface show you all the email that has been received. In this article, I will show you how to deploy …
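
As a quick sketch of the idea, using the commonly published codecentric chart repo (release name illustrative; 1025/8025 are MailHog's default SMTP and web UI ports):

  helm repo add codecentric https://codecentric.github.io/helm-charts
  helm install mailhog codecentric/mailhog
  # SMTP on 1025, web UI on 8025
  kubectl port-forward svc/mailhog 8025:8025 1025:1025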

Kubernetes: NFS mount using dynamic volume and Storage Class

If you have an external NFS export and want to share that with a pod/deployment, you can leverage the nfs-subdir-external-provisioner to create a storageclass that can dynamically create the persistent volume. In contrast to manually creating the persistent volume and persistent volume claim, this dynamic method cedes the lifecycle of the persistent volume over to the provisioner.
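
Its documented helm-based install takes the NFS server and export path as chart values (the server address and path here are illustrative):

  helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  helm install nfs-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.50 \
    --set nfs.path=/exports/k8s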

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.9 to 1.10

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.9 to 1.10 on VMware. The instructions provided here are assuming you have used the Ansible scripts and Seed VM described in my previous Anthos 1.9 installation article.
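
The upgrade itself flows through gkeadm and gkectl. A heavily abbreviated outline, assuming illustrative config file names and omitting the preparatory checks:

  # upgrade the admin workstation first
  gkeadm upgrade admin-workstation --config admin-ws-config.yaml
  # then the user cluster(s), then the admin cluster
  gkectl upgrade cluster --kubeconfig kubeconfig --config user-cluster.yaml
  gkectl upgrade admin --kubeconfig kubeconfig --config admin-cluster.yaml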

Kubernetes: Anthos GKE on-prem 1.10 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps as officially documented here. However, if you want to practice in a …

Ubuntu: Running a bash script periodically with a user-level Systemd timer

If you have a Bash script that needs to run periodically, you can run it using a crontab entry.  But you can also have it invoked by Systemd using systemd.timer. Furthermore, you can run Systemd services as user-level services instead of the typical system-level service for even further isolation. Running via Systemd provides more powerful constructs for invocation, configuration, monitoring, and logging.
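
A minimal sketch of the user-level unit pair under ~/.config/systemd/user/ (unit names, schedule, and script path are illustrative):

  # ~/.config/systemd/user/myscript.service
  [Unit]
  Description=Run my periodic script

  [Service]
  Type=oneshot
  ExecStart=%h/bin/myscript.sh

  # ~/.config/systemd/user/myscript.timer
  [Unit]
  Description=Run myscript.service every 5 minutes

  [Timer]
  OnCalendar=*:0/5
  Persistent=true

  [Install]
  WantedBy=timers.target

Enable it without sudo: ‘systemctl --user daemon-reload && systemctl --user enable --now myscript.timer’.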

Ubuntu: Running a bash script periodically with a system-level Systemd timer

If you have a Bash script that needs to run periodically, you can run it using a crontab entry or file.  But you can also have it invoked from Systemd using systemd.timer. Running via Systemd provides more powerful constructs for invocation, configuration, monitoring, and logging.  In this article, I will show how to periodically run …
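
The system-level variant uses the same unit pair, but placed in /etc/systemd/system/ and managed with root privileges (unit name matches the sketch above and is illustrative):

  sudo systemctl daemon-reload
  sudo systemctl enable --now myscript.timer
  # confirm the schedule plus last and next run
  systemctl list-timers myscript.timer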

Python: constructing a DataFrame from a relational database with pandas

If the dataset you want to analyze with pandas is coming from a normalized relational database, then you can use ‘pandas.read_sql’ to pull the data directly. In this article, we will deploy a small MariaDB instance with Docker and show how we can create a DataFrame directly from a single table or from a join between multiple tables.
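
A minimal sketch of the pattern (connection string, table names, and the pymysql driver choice are illustrative):

  import pandas as pd
  from sqlalchemy import create_engine

  # hypothetical credentials for the dockerized MariaDB instance
  engine = create_engine("mysql+pymysql://user:password@localhost:3306/mydb")

  # pull a single table straight into a DataFrame
  df = pd.read_sql("SELECT * FROM customers", engine)
  print(df.head())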

Docker: Installing Docker CE on Ubuntu focal 20.04

Docker is a container platform that streamlines software delivery and provides isolation, scalability, and efficiency with far less overhead than full virtual machines. These instructions are taken directly from the official Docker for Ubuntu page, but I wanted to reiterate those tasks essential for installing the Docker Community Edition on Ubuntu focal 20.04. If you want …
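
A condensed sketch of the official repository-based install for focal:

  sudo apt-get update
  sudo apt-get install -y ca-certificates curl gnupg
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
    | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
  sudo apt-get install -y docker-ce docker-ce-cli containerd.io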