Kubernetes

Prometheus: adding a Grafana dashboard using a ConfigMap

If your Grafana deployment uses a sidecar to watch for new dashboards defined as ConfigMaps, then adding a dashboard is a dynamic operation that can be done without even restarting the pod. If you have deployed the Prometheus/Grafana stack with kube-prometheus-stack, then you can check for the existence of the ‘grafana-sc-dashboard’ sidecar.
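A minimal sketch, assuming the chart defaults (stack installed in the ‘monitoring’ namespace, sidecar watching for the ‘grafana_dashboard’ label; the dashboard file name is hypothetical):

```bash
# list container names in the Grafana pod; 'grafana-sc-dashboard' should appear
kubectl get pods -n monitoring -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[0].spec.containers[*].name}'

# publish a dashboard JSON as a ConfigMap the sidecar will pick up dynamically
kubectl create configmap my-dashboard -n monitoring --from-file=my-dashboard.json
kubectl label configmap my-dashboard -n monitoring grafana_dashboard=1
```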

Java: build OCI compatible image for Spring Boot web app using jib

While working on your Spring Boot web application locally, Gradle provides the ‘bootRun’ task for a quick development lifecycle and ‘bootJar’ for packaging all the dependencies as a single jar deliverable. But for most applications these days, you will need this packaged into an OCI-compatible (i.e. Docker) image for its ultimate deployment to an orchestrator…
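As a sketch, assuming the ‘com.google.cloud.tools.jib’ plugin is already applied in build.gradle (the registry path and tags are hypothetical):

```bash
# build and push an OCI image directly to a registry; no Dockerfile or docker daemon needed
./gradlew jib --image=registry.example.com/myteam/myapp:1.0.0

# or build into the local Docker daemon for quick testing
./gradlew jibDockerBuild --image=myapp:dev
```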

Prometheus: external template for AlertManager html email with kube-prometheus-stack

The kube-prometheus-stack bundles AlertManager for taking action on Prometheus alerts. And if you are customizing the Helm custom values file to configure email alerting, there are multiple options available. The simplest is to allow the system to fall back to using the default subject and html templates. But if you need to tailor the email content…
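Below is a rough sketch of a values fragment, assuming a release named ‘kube-prometheus-stack’ in the ‘monitoring’ namespace; the receiver name, addresses, and template body are hypothetical, and the global SMTP settings are elided:

```bash
cat >alertmanager-values.yaml <<'EOF'
alertmanager:
  templateFiles:
    email.tmpl: |-
      {{ define "myorg.email.html" }}<b>{{ .CommonLabels.alertname }}</b>{{ end }}
  config:
    receivers:
      - name: email-admins
        email_configs:
          - to: admins@example.com
            html: '{{ template "myorg.email.html" . }}'
EOF

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --reuse-values -f alertmanager-values.yaml
```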

Prometheus: exposing Prometheus/Grafana as Ingress for kube-prometheus-stack

The kube-prometheus-stack bundles Prometheus, Grafana, and AlertManager for monitoring a Kubernetes cluster. By default, the Ingress of these services is disabled. In this article I will show you how to expose these services with NGINX Ingress either via subdomain (e.g. prometheus.my.domain) or web context (e.g. my.domain/prometheus). You would not want to expose these monitoring applications…
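For the subdomain style, a sketch of the values involved might look like the below (hostnames are hypothetical, and key names are per recent kube-prometheus-stack chart versions):

```bash
cat >ingress-values.yaml <<'EOF'
grafana:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - grafana.my.domain
prometheus:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - prometheus.my.domain
EOF

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --reuse-values -f ingress-values.yaml
```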

Prometheus: installing kube-prometheus-stack on K3s cluster

The kube-prometheus-stack bundles the Prometheus Operator, monitors/rules, Grafana dashboards, and AlertManager needed to monitor a Kubernetes cluster. But there are customizations necessary to tailor the Helm installation for K3s, a lightweight Kubernetes distribution. In this article, I will detail the necessary modifications to deploy a healthy monitoring stack on a K3s cluster.
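One common set of K3s adjustments is pointing the control-plane scrape targets at the K3s server node, since K3s runs these components embedded in a single process (the node IP below is hypothetical):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

cat >k3s-values.yaml <<'EOF'
kubeControllerManager:
  endpoints: [192.168.1.10]
kubeScheduler:
  endpoints: [192.168.1.10]
kubeProxy:
  endpoints: [192.168.1.10]
EOF

helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace -f k3s-values.yaml
```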

Kubernetes: targeting the addition of array items to a multi-document yaml manifest

If you have a Kubernetes yaml manifest that contains multiple documents, targeting a single document for modification while still outputting the other documents untouched can be a challenge. As an example, consider the simple case below where you have a single yaml file that contains a Namespace, a Deployment, and a DaemonSet, and we want to add…
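One way to do this is with yq v4 (mikefarah), where using ‘select’ on the left side of an update modifies only the matching document while the others pass through unchanged (the file name and toleration values are hypothetical):

```bash
# add a toleration only to the Deployment; the Namespace and DaemonSet are emitted untouched
yq e 'select(.kind == "Deployment").spec.template.spec.tolerations += [{"key": "dedicated", "operator": "Exists", "effect": "NoSchedule"}]' manifest.yaml
```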

Kubernetes: liveness probe for Spring Boot with custom Actuator health check

Kubernetes liveness and readiness probes are how the kubelet determines the health of a pod. This is often as simple as checking the ability to reach the main service port over TCP or HTTP. But if you are using Spring Boot and have enabled the Actuator dependency, you have the ability to create even…
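As a sketch, assuming Spring Boot 2.3+ with the health probe groups exposed (management.endpoint.health.probes.enabled=true) and the default port 8080:

```bash
# confirm the Actuator health groups respond (e.g. against a local bootRun instance)
curl -s http://localhost:8080/actuator/health/liveness
curl -s http://localhost:8080/actuator/health/readiness

# write the probe fragment you would merge into the Deployment's container spec
cat >probes.yaml <<'EOF'
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
EOF
```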

GCP: Enable HttpLoadBalancing feature on Cluster to avoid errors when applying BackendConfig

If you are configuring Istio/ASM ingress gateways with a BackendConfig for specifying health checks, timeouts, or Cloud Armor policies, then you need to ensure that your GKE cluster has the HttpLoadBalancing feature enabled. If this feature is not enabled, you will see an error message like below when attempting to apply the BackendConfig manifest: unable…
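Checking and enabling the addon is a single gcloud operation (cluster name and zone below are hypothetical):

```bash
# inspect the current addon state
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format="value(addonsConfig.httpLoadBalancing)"

# enable the HttpLoadBalancing addon
gcloud container clusters update my-cluster --zone us-central1-a \
  --update-addons=HttpLoadBalancing=ENABLED
```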

Kubernetes: retrieving services and pods network CIDR block from cluster

When configuring networks and load balancers, sometimes you need the network CIDR block used by the Services of a Kubernetes cluster. There are various ways to pull this information from different Kubernetes implementations, but one trick that works across implementations is looking at the error message from kubectl if you attempt to create a service at an…
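A sketch of the trick; the address 1.1.1.1 just needs to sit outside the real range, and the CIDR reported in the rejection will vary by cluster (10.43.0.0/16 below is the K3s default):

```bash
# deliberately request an out-of-range ClusterIP; the rejection reveals the service CIDR
kubectl create service clusterip cidr-probe --clusterip=1.1.1.1 --tcp=80:80
# ... provided IP is not in the valid range. The range of valid IPs is 10.43.0.0/16
```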

GCP: Enabling autoUpgrade for node-pools to reduce manual maintenance

GKE cluster upgrades do not need to be a manual process. GKE clusters can be auto-upgraded by subscribing the cluster to an appropriate release channel and assigning a sensible maintenance window. As long as adequate pod disruption budgets, replicas, and ingress are configured, these upgrades can happen without interrupting availability. To check the current…
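To check the current channel and node-pool setting, and then turn on auto-upgrade (cluster, pool, and region names are hypothetical):

```bash
# current release channel and node-pool auto-upgrade flag
gcloud container clusters describe my-cluster --region us-central1 \
  --format="value(releaseChannel.channel)"
gcloud container node-pools describe default-pool --cluster my-cluster \
  --region us-central1 --format="value(management.autoUpgrade)"

# enable auto-upgrade on the node pool
gcloud container node-pools update default-pool --cluster my-cluster \
  --region us-central1 --enable-autoupgrade
```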

Kubernetes: Anthos GKE on-prem 1.11 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps as officially documented here. However, if you want to practice in a…

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.10 to 1.11

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from Anthos 1.10 to 1.11 on VMware. The instructions provided here are assuming you have used the Ansible scripts and Seed VM described in my previous Anthos 1.10 installation…

Python: New Relic Agent for Gunicorn app deployed on Kubernetes

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. If you are running these types of workloads on a production Kubernetes cluster, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging. In a series of previous articles, we…
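The typical pattern is wrapping the gunicorn entrypoint with the agent's admin script (the app module, name, and license key below are hypothetical; in Kubernetes these would normally come from a Secret):

```bash
pip install newrelic

NEW_RELIC_APP_NAME="my-flask-app" NEW_RELIC_LICENSE_KEY="xxxx" \
  newrelic-admin run-program gunicorn --bind 0.0.0.0:8000 app:app
```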

Kubernetes: kustomize with Helm charts

kustomize is typically used to overlay a base set of yaml, but it also has the ability to leverage existing Helm charts, and overlay a set of custom values with HelmChartInflationGenerator. In this article, I will use kustomize to deploy the Bitnami NGINX Helm chart with overridden values that provide a customized nginx.conf and custom…
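A minimal sketch of the helmCharts field, which drives the HelmChartInflationGenerator (the chart version and value overrides are hypothetical):

```bash
cat >kustomization.yaml <<'EOF'
helmCharts:
  - name: nginx
    repo: https://charts.bitnami.com/bitnami
    version: 13.2.1
    releaseName: my-nginx
    valuesInline:
      service:
        type: ClusterIP
EOF

kustomize build --enable-helm . | kubectl apply -f -
```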

Kubernetes: kustomize transformations with patchesStrategicMerge

The power of kustomize lies in its ability to transform yaml, and one of the methods for this is patchesStrategicMerge. Where the strategic merge patch excels is in inserting elements and replacing values, allowing you to specify the desired patch using the same indentation level as the target, which makes the intended result very intuitive.
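A compact sketch: a sparse patch that bumps replicas on a base Deployment, merged by matching apiVersion/kind/name (all file and resource names are hypothetical):

```bash
cat >deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: web
          image: nginx:1.25
EOF

cat >replica-patch.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
EOF

cat >kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
patchesStrategicMerge:
  - replica-patch.yaml
EOF

# render the transformed yaml; replicas comes out as 3
kubectl kustomize .
```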

Kubernetes: volumeMount, emptyDir, and env equivalents during local Docker development

Kubernetes has a rich way of expressing volumes/volumeMounts for mounting files, emptyDir for ephemeral directories, and env/envFrom for adding environment variables to your container definition running on a Kubernetes cluster. However, if you are actively iterating on the development of an image, it may slow you down to require a deployment to a remote…
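Rough docker-run equivalents for local development (paths, env names, and the image below are hypothetical): a bind mount stands in for volumes/volumeMounts, --tmpfs for emptyDir, and -e/--env-file for env/envFrom:

```bash
docker run --rm \
  -v "$(pwd)/conf/app.properties:/etc/myapp/app.properties:ro" \
  --tmpfs /scratch \
  -e LOG_LEVEL=debug \
  --env-file ./dev.env \
  myapp:dev
```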

Kubernetes: kustomize overlay to enrich a base resource

With kustomize built into the kubectl CLI since version 1.14, there is little reason not to take advantage of this overlay system to deploy components to your Kubernetes cluster. Kustomize has the advantage that it is purpose-built to understand and validate yaml and Kubernetes CRDs, as opposed to bespoke templating solutions using sed/envsubst, Ansible…
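A sketch of the conventional layout, assuming a base/ directory that already carries its own kustomization.yaml and raw manifests (directory and label names are hypothetical):

```bash
mkdir -p overlays/prod
cat >overlays/prod/kustomization.yaml <<'EOF'
resources:
  - ../../base
namespace: prod
commonLabels:
  env: prod
EOF

# kustomize is built into kubectl (1.14+), so the overlay applies directly
kubectl apply -k overlays/prod
```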

Kubernetes: emptying the finalizers for a namespace that will not delete

If your intent is to delete all the objects in a namespace, but the command is not completing, emptying the namespace finalizers will often allow the deletion to finish. For example, suppose you have tried deleting “my-namespace” like below and it will not complete: kubectl delete ns my-namespace --force --grace-period=0. Then, as written by…
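The commonly used approach is clearing the finalizer list through the namespace's /finalize subresource (using the ‘my-namespace’ example from above):

```bash
kubectl get ns my-namespace -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -
```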

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.9 to 1.10

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.9 to 1.10 on VMware. The instructions provided here are assuming you have used the Ansible scripts and Seed VM described in my previous Anthos 1.9 installation article.

Kubernetes: major version upgrade of Anthos GKE on-prem from 1.8 to 1.9

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. In this article, I will be following the steps required to upgrade from 1.8 to 1.9 on VMware. The instructions provided here are assuming you have used the Ansible scripts and Seed VM described in my previous Anthos 1.8 installation article.