
Kubernetes: major version upgrade of Anthos GKE on-prem from 1.10 to 1.11

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premises datacenters. In this article, I will be following the steps required to upgrade from Anthos 1.10 to 1.11 on VMware. The instructions provided here assume you have used the Ansible scripts and Seed VM described in my previous Anthos 1.10 installation…
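As a rough sketch of the core commands involved (not the full procedure, and the kubeconfig/config file names here are placeholders), the user cluster and then the admin cluster are upgraded with gkectl:

# upgrade the user cluster first, then the admin cluster (file names are assumptions)
gkectl upgrade cluster --kubeconfig kubeconfig --config user-cluster.yaml
gkectl upgrade admin --kubeconfig kubeconfig --config admin-cluster.yaml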

Python: New Relic Agent for Gunicorn app deployed on Kubernetes

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. If you are running these types of workloads on a production Kubernetes cluster, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging. In a series of previous articles, we…
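A minimal sketch of the idea, using hypothetical names for the Deployment and Secret: the New Relic license key and application name are surfaced as environment variables on the running container.

# hypothetical names; the secret key is assumed to be named NEW_RELIC_LICENSE_KEY
kubectl create secret generic newrelic-secret --from-literal=NEW_RELIC_LICENSE_KEY=<your-license-key>
kubectl set env deployment/flask-gunicorn --from=secret/newrelic-secret
kubectl set env deployment/flask-gunicorn NEW_RELIC_APP_NAME=flask-gunicorn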

Python: New Relic instrumentation for Flask app deployed with Gunicorn

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. If you are running these types of workloads in production, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging. In a previous article, we created a Docker image of…
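For reference, the general pattern for wrapping Gunicorn with the New Relic Python agent looks like this (the license key and the app:app module path are placeholders):

pip install newrelic
newrelic-admin generate-config <your-license-key> newrelic.ini
NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program gunicorn --bind=0.0.0.0:8080 app:app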

Python: Building an image for a Flask app served from Gunicorn

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. Running Flask applications directly is great for development and testing of the basic request/response flow, but you need Gunicorn to handle production-level loads, concurrency, logging, and timeouts. In this article, I will show you how to build a Docker image…
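As a quick illustration of the difference, here is the Flask development server next to a Gunicorn invocation (the app:app module path is an assumption):

# development only: the Flask built-in server
flask run --host=0.0.0.0 --port=8080
# production style: gunicorn with multiple workers, stdout access logs, and a request timeout
gunicorn --bind=0.0.0.0:8080 --workers=2 --timeout=60 --access-logfile=- app:app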

GCP: Moving a VM instance to a different region using snapshots

The ‘gcloud compute instances move’ command is convenient for moving VM instances from one region to another, but it only works within a narrow scope of OS image types and disks. For example, only older non-UEFI OS images can be moved with this command. Trying to move even the simplest Ubuntu bionic/focal or Debian bullseye/buster VM…
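The snapshot-based alternative follows this general shape, with the disk, zone, and instance names being placeholders:

# snapshot the source disk, recreate it in the target zone, then build a new VM on the copy
gcloud compute disks snapshot my-disk --zone=us-central1-a --snapshot-names=my-disk-snap
gcloud compute disks create my-disk-copy --source-snapshot=my-disk-snap --zone=us-east1-b
gcloud compute instances create my-vm-copy --zone=us-east1-b --disk=name=my-disk-copy,boot=yes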

GCP: Enable Policy Controller on a GKE cluster

Anthos Policy Controller enables enforcement of compliance, security, and organizational policies on GKE clusters. These might be best-practice policies coming from internal architectural standards, technical policies used to define or constrain resources, or audit requirements stemming from legal regulation. Anthos Policy Controller is built upon the open-source Open Policy Agent (OPA) Gatekeeper, which uses a Kubernetes…
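Once Policy Controller is enabled, policies are expressed as constraints. A minimal sketch using the K8sRequiredLabels template from the constraint library (assuming that library is installed, and using a hypothetical label requirement):

kubectl apply -f - <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "owner"
EOF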

Ubuntu: install latest git client from PPA to fix ‘unsafe repository’ errors

Since the announcement of CVE-2022-24765, newer git clients from the Ubuntu security and archive package repositories may throw errors about “unsafe repository … is owned by someone else” if directories are not owned by your personal user id. First, try to resolve the issue by running the command suggested in the error message…
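For reference, the two remediations look like this (the repository path is a placeholder):

# mark the specific directory as safe, as the error message suggests
git config --global --add safe.directory /path/to/repo
# or install the latest git client from the upstream PPA
sudo add-apt-repository -y ppa:git-core/ppa
sudo apt-get update && sudo apt-get install -y git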

Kubernetes: kustomize with Helm charts

kustomize is typically used to overlay a base set of yaml, but it also has the ability to leverage existing Helm charts and overlay a set of custom values with HelmChartInflationGenerator. In this article, I will use kustomize to deploy the Bitnami NGINX Helm chart with overridden values that provide a customized nginx.conf and custom…
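A minimal sketch of a helm-chart-aware kustomization and the build flag that enables chart inflation (the release name and overridden values here are only illustrative):

cat > kustomization.yaml <<'EOF'
helmCharts:
  - name: nginx
    repo: https://charts.bitnami.com/bitnami
    releaseName: my-nginx
    valuesInline:
      service:
        type: ClusterIP
EOF
kustomize build --enable-helm .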

Kubernetes: kustomize transformations with patchesStrategicMerge

The power of kustomize lies in its ability to transform yaml, and one of the methods for this is patchesStrategicMerge. Where the strategic merge patch excels is in inserting elements and replacing values, allowing you to specify the desired patch using the same indentation level as the target, which makes the intended result very intuitive.
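As a small illustration, a kustomization that bumps the replica count of a Deployment named 'my-deploy' (a hypothetical name, assumed to be defined in deployment.yaml) via a strategic merge patch:

cat > kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
patchesStrategicMerge:
  - patch-replicas.yaml
EOF
cat > patch-replicas.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 3
EOF
kubectl kustomize .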

Kubernetes: volumeMount, emptyDir, and env equivalents during local Docker development

Kubernetes has a rich way of expressing volumes/volumeMounts for mounting files, emptyDir for ephemeral directories, and env/envFrom for adding environment variables to your container definition running on a Kubernetes cluster. However, if you are actively iterating on the development of an image, it may slow you down to require a deployment to a remote…
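A rough mapping of those Kubernetes concepts onto local docker run flags (the image name, paths, and env file are placeholders):

# -v mounts a single local file (like a volumeMount), --tmpfs gives an ephemeral
# directory (like emptyDir), and -e / --env-file mirror env / envFrom
docker run --rm \
  -v "$PWD/config/app.conf:/etc/myapp/app.conf:ro" \
  --tmpfs /scratch \
  -e APP_MODE=dev \
  --env-file ./dev.env \
  my-image:latest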

Kubernetes: kustomize overlay to enrich a base resource

With kustomize built into the kubectl CLI since version 1.14, there is little reason not to take advantage of this overlay system to deploy components to your Kubernetes cluster. Kustomize has the advantage that it is purpose-built to understand and validate yaml and Kubernetes CRDs, as opposed to bespoke templating solutions using sed/envsubst, Ansible,…
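The typical layout is a base plus per-environment overlays. A minimal sketch, assuming base/ already contains the resource yaml and its own kustomization.yaml (directory and label names are illustrative):

mkdir -p overlays/dev
cat > overlays/dev/kustomization.yaml <<'EOF'
resources:
  - ../../base
namePrefix: dev-
commonLabels:
  env: dev
EOF
kubectl apply -k overlays/dev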

GCP: Cloud Function to handle requests to HTTPS LB during maintenance

At some point you may need to schedule a maintenance window for your solution. But that doesn’t mean the end-user traffic or client integrations will stop requesting the services from the GCP external HTTPS LB that fronts all client requests. The VM instances and GKE clusters that normally respond to requests may not be able…
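The general wiring is to put the Cloud Function behind a serverless NEG and attach that NEG as a backend of the load balancer; a rough sketch with placeholder names and region:

gcloud compute network-endpoint-groups create maintenance-neg \
  --region=us-central1 --network-endpoint-type=serverless \
  --cloud-function-name=maintenance-handler
gcloud compute backend-services create maintenance-backend --global
gcloud compute backend-services add-backend maintenance-backend --global \
  --network-endpoint-group=maintenance-neg \
  --network-endpoint-group-region=us-central1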

GCP: Deploying a 2nd gen Python Cloud Function and exposing from an HTTPS LB

GCP Cloud Functions have taken a step forward with the 2nd generation release. One of the biggest architectural differences is that multiple requests can now run concurrently on a single instance, enabling larger traffic loads. In this article, I will show you how to deploy a simple Python Flask web server as a 2nd gen…
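For reference, deploying a 2nd gen HTTP function from the current directory looks roughly like this (function name, entry point, runtime version, and region are placeholders):

gcloud functions deploy hello-gen2 \
  --gen2 --runtime=python310 --region=us-central1 \
  --source=. --entry-point=hello_http \
  --trigger-http --allow-unauthenticated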

GCP: global external HTTPS LB for securely exposing insecure VM services

If you have unmanaged GCP VM instances running services on insecure ports (e.g. Apache HTTP on port 80), one way to secure the public external traffic is to create an external GCP HTTPS load balancer. Conceptually, we want to expose a secure front to otherwise insecure services. While the preferred method would be to secure…
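At a high level, the load balancer is assembled from a health check, backend service, URL map, certificate, HTTPS proxy, and forwarding rule. A condensed sketch with placeholder names, instance group, and domain:

gcloud compute health-checks create http basic-hc --port=80
gcloud compute backend-services create web-backend --protocol=HTTP --health-checks=basic-hc --global
gcloud compute backend-services add-backend web-backend --global \
  --instance-group=web-ig --instance-group-zone=us-central1-a
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute ssl-certificates create web-cert --domains=www.example.com --global
gcloud compute target-https-proxies create web-proxy --url-map=web-map --ssl-certificates=web-cert
gcloud compute forwarding-rules create web-https --global --target-https-proxy=web-proxy --ports=443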

GCP: internal HTTPS LB for securely exposing insecure VM services

If you have unmanaged GCP VM instances running services on insecure ports (e.g. Apache HTTP on port 80), one way to secure the internal communication coming from other internal pods/apps is to create an internal GCP HTTPS load balancer. Conceptually, we want to expose a secure front to otherwise insecure services. While the preferred method…
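The internal variant follows the same general pattern as the external one, but regionally, and it requires a proxy-only subnet in the VPC. The pieces that differ look roughly like this (network, region, and CIDR range are placeholders):

# internal HTTP(S) load balancers are regional and need a proxy-only subnet
gcloud compute networks subnets create proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY --role=ACTIVE \
  --network=my-vpc --region=us-central1 --range=10.129.0.0/23
# the backend service, url-map, proxy, and forwarding rule are then created with
# --region=us-central1 and --load-balancing-scheme=INTERNAL_MANAGED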

GCP: serving a maintenance page using an HTTPS LB and container native routing

No matter how highly available your services, there may still be significant backend events that require planned maintenance. During this downtime, you should still reply to end users and service integrations with a proper response. In this article, I will show you how to configure your GCP HTTPS load balancer so that a single maintenance service…
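Container-native routing relies on the Service carrying a NEG annotation so the load balancer can target pods directly. A minimal sketch, with the service name, selector, and ports being assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: maintenance-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: maintenance
  ports:
    - port: 80
      targetPort: 8080
EOF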

Kubernetes: emptying the finalizers for a namespace that will not delete

If your intent is to delete all the objects in a namespace, but the command is not completing, emptying the namespace finalizer will often allow the deletion to finish. For example, you may have tried deleting the “my-namespace” namespace like below, and it will not complete. kubectl delete ns my-namespace --force --grace-period=0 Then, as written by…
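The commonly used approach is to strip the finalizer list and push it through the namespace finalize subresource, sketched here assuming jq is available:

kubectl get namespace my-namespace -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -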

GCP: HTTP to HTTPS redirection using HTTPS LB Ingress

It is not necessary to create an independent GCP HTTPS LB or other improvisation to redirect insecure HTTP traffic to your HTTPS load balancer. The existing public Ingress can reference a FrontendConfig object that specifies redirection to HTTPS. Below is a FrontendConfig definition that can redirect the insecure traffic.
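A minimal sketch of such a definition and the Ingress annotation that references it (the object name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: http-to-https
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
EOF
# then reference it from the Ingress with the annotation:
#   networking.gke.io/v1beta1.FrontendConfig: "http-to-https"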

GCP: Private GKE cluster in Autopilot mode using Terraform

GKE Autopilot reduces the operational costs of managing GKE clusters by freeing you from node-level maintenance, letting you focus just on pod workloads. Costs are accrued based on pod resource consumption rather than on node resource sizes or node count, which are managed by Google. Since you no longer own the node level, there are…
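The article drives this with Terraform; purely as a point of reference (not the article's Terraform), the approximate gcloud equivalent for a private Autopilot cluster is:

# cluster name and region are placeholders
gcloud container clusters create-auto my-autopilot-cluster \
  --region=us-central1 --enable-private-nodes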

GCP: Private GKE Cluster with Anthos Service Mesh exposing services

As opposed to public GKE clusters, which have their IP addresses exposed, private GKE clusters use private internal IP addresses. This provides an enhanced security stance, but also means we need a solution such as Anthos Service Mesh to explicitly expose our services. In our previous article, we built a private GKE cluster using Terraform.
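With Anthos Service Mesh, services are typically exposed through an ingress gateway using the standard Istio APIs. A minimal sketch, assuming an istio-ingressgateway is running and a backend Service named 'web' listens on port 8080 (both assumptions):

kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
    - "*"
  gateways:
    - web-gateway
  http:
    - route:
        - destination:
            host: web
            port:
              number: 8080
EOF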

Kubernetes: ingress-nginx-controller-admission error, x509 certificate signed by unknown authority

If you delete the entire nginx namespace and reinstall again via Helm chart, your nginx admission controller may throw an “x509 certificate signed by unknown authority” message when you attempt to create an nginx ingress. This will happen regardless of whether the ingress is using plain http or secure https, and whether or not the…
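One commonly cited remediation (not necessarily the exact fix described in the article) is to remove the stale validating webhook configuration left over from the earlier install, then reinstall the chart so the webhook and its certificate are regenerated together:

kubectl get validatingwebhookconfiguration ingress-nginx-admission
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
# assuming the ingress-nginx helm repo is already added
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx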

Kubernetes: using kubectl to wait for condition of pods, deployments, services

Instead of deploying a pod or service and periodically checking its status for readiness, or having your automation scripts wait for a certain number of seconds before moving to the next operation, it is much cleaner to use ‘kubectl wait’ to sense completion. Here is how you would wait for READY status…
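For reference, typical invocations look like this (resource names and timeouts are placeholders):

kubectl wait --for=condition=Ready pod/my-pod --timeout=120s
kubectl wait --for=condition=Available deployment/my-deployment --timeout=180s
kubectl wait --for=delete pod/my-pod --timeout=60s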