Kubernetes

Vault: synchronizing secrets from Vault to Kubernetes using Vault Secrets Operator

The Vault Secrets Operator is a Vault integration that runs inside a Kubernetes cluster and synchronizes Vault-level secrets to Kubernetes-level secrets. This secret synchronization happens transparently to the running workloads, without any need to retrofit existing images or manifests. In this article, I will show how to: install the Vault Secrets Operator (VSO), configure the …
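
As a rough sketch of what the synchronization looks like once VSO is installed: the operator watches custom resources such as VaultStaticSecret and writes the referenced Vault data into a native Kubernetes Secret. The names, mount, and path below are illustrative assumptions, not values from the article; verify field names against the VSO CRDs.

apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: webapp-static-secret
  namespace: webapp
spec:
  vaultAuthRef: vault-auth      # VaultAuth resource describing how VSO logs into Vault (assumed name)
  mount: secret                 # kv-v2 mount in Vault (assumed)
  type: kv-v2
  path: webapp/config           # path of the Vault secret (assumed)
  refreshAfter: 60s
  destination:
    create: true
    name: webapp-secret         # Kubernetes Secret that VSO creates and keeps in sync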

minikube: exposing a deployment using ingress with secure TLS

minikube makes it easy to spin up a local Kubernetes cluster, and adding an Ingress is convenient with its built-in Addons. In this article, I want to take it one step further and show how to use a custom key/certificate to expose a service using TLS (secure https).
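
To sketch the general shape of this (names and hostnames are placeholders, not the article's values): enable the ingress addon, load the custom key/certificate as a TLS secret, and reference that secret from the Ingress.

minikube addons enable ingress

# create a TLS secret from your custom key/certificate
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key

# then reference the secret from the Ingress spec
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - hosts:
    - myapp.example.local
    secretName: my-tls
  rules:
  - host: myapp.example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80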

Vault: JWT authentication mode with multiple roles to isolate secrets

In this article, I will detail how to use Vault JWT auth mode to isolate the secrets of two different deployments in the same Kubernetes cluster. This will be done by using two different Kubernetes Service Accounts, each of which generates a unique JWT tied to a different Vault role. JWT auth mode is …
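
The isolation comes from defining one Vault role per service account, each mapped to its own policy. A minimal sketch of one such role (the audience, subject, and policy names are assumptions for illustration):

vault write auth/jwt/role/app1 \
  role_type="jwt" \
  bound_audiences="https://kubernetes.default.svc.cluster.local" \
  user_claim="sub" \
  bound_subject="system:serviceaccount:team1:app1-sa" \
  token_policies="app1-policy"

A second role (e.g. auth/jwt/role/app2) would bind a different service account subject to a different policy, so each deployment can only read its own secrets.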

Vault: NodeJS Express web app using node-vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a NodeJS Express web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using the node-vault module. This …

Vault: Spring Boot web app using Spring Cloud Vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a Java Spring Boot web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using the Spring Cloud …

Vault: HashiCorp Vault deployed into Kubernetes cluster for secret management

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. It has tight integrations with Kubernetes that allow containers to fetch secrets without hardcoding them into environment variables, files, or external services. The official docs already provide usage scenarios, so …
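
For orientation, the Helm-based install typically starts along these lines (release name and namespace are arbitrary placeholders):

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault -n vault --create-namespace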

Helm: automated publishing of Helm repo with Github Actions

In a previous article, I described how to expose a Github source repo as a public Helm repository by enabling Github Pages and running the chart-releaser utility. In this article, I want to remove the manual invocation of the chart-releaser, and instead place that into a Github Actions workflow that automatically publishes changes to the …
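
The workflow ends up roughly like the canonical example from the helm/chart-releaser-action project; treat this as a sketch rather than the article's exact file (the branch name and charts directory are assumptions):

name: Release Charts
on:
  push:
    branches:
      - main
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: charts
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"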

Helm: manually publishing Helm repo on Github using chart-releaser

The only requirement for a public Helm chart repository is that it exposes a URL named “index.yaml”. So by adding a file named “index.yaml” to source control and enabling Github Pages to serve the file over HTTPS, you have the minimal basis for a public Helm chart repository. The backing Chart content (.tgz) can also …
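
From memory, the chart-releaser (cr) flow looks roughly like the commands below; the owner/repo values are placeholders and the exact flags should be verified against cr --help:

# package the chart, upload the .tgz as a GitHub release, then regenerate index.yaml
cr package charts/mychart
cr upload -o myuser -r myrepo -t "$CR_TOKEN"
cr index -o myuser -r myrepo -c https://myuser.github.io/myrepo -t "$CR_TOKEN"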

Helm: discovering Helm chart releases installed into Kubernetes cluster

If you are administering a Kubernetes cluster that you have inherited or perhaps not visited in a while, then you may need to reacquaint yourself with: which Helm charts are installed into what namespaces, if there are chart updates available, and then what values were used for chart installation. Below are commands that can assist …
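
A few of the commands that help with that reacquaintance (release and chart names are placeholders):

# which releases are installed, and in which namespaces
helm list -A

# what values were supplied when a release was installed/upgraded
helm get values my-release -n my-namespace

# what chart versions are available from your configured repos
helm repo update
helm search repo my-chart --versions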

Kubernetes: patching container arguments array with kubectl and jq

The need to configure a specific pod’s container arguments is a common Kubernetes administration task. As examples, you might need to enable verbose logging, set an explicit value to override a default, or configure a host name or port set in a container’s arguments. In the example below, we are targeting the ‘metrics-server’ in the …
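
As a sketch of the jq-based approach against metrics-server (the appended flag is just an illustrative example of an argument change, not necessarily the one used in the article):

kubectl get deployment metrics-server -n kube-system -o json \
  | jq '.spec.template.spec.containers[0].args += ["--kubelet-insecure-tls"]' \
  | kubectl replace -f -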

Kubernetes: HorizontalPodAutoscaler evaluation based on Prometheus metric

HorizontalPodAutoscaler (HPA) allows you to dynamically scale the replica count of your Deployment based on basic CPU/memory resource metrics from the metrics-server. If you want scaling based on more advanced scenarios and you are already using the Prometheus stack, the prometheus-adapter provides this enhancement. The prometheus-adapter takes basic Prometheus metrics, and then synthesizes custom API …
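
Once the prometheus-adapter is serving a custom metric, the HPA can target it with a Pods-type metric. A minimal sketch, assuming a hypothetical http_requests_per_second metric and placeholder names:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # custom metric exposed via prometheus-adapter (hypothetical)
      target:
        type: AverageValue
        averageValue: "100"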

Kubernetes: implementing and testing a HorizontalPodAutoscaler

HorizontalPodAutoscaler (HPA) allows you to dynamically scale the replica count of your Deployment based on criteria such as memory or CPU utilization, which makes it a great way to manage spikes in utilization while still keeping your cluster size and infrastructure costs managed effectively. In order for HPA to evaluate CPU and memory utilization and take …
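
The quickest way to try this (deployment name and thresholds are placeholders; a working metrics-server is required):

kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=5
kubectl get hpa -w   # watch current/target utilization and the replica count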

Kubernetes: fixing x509 certificate errors from metric-server on K3s cluster

K3s is deployed by default with a metrics-server, but if you have a multi-node cluster it will fail unless you add the names of all the nodes to the kube-apiserver certificate. Symptoms of this problem include: the metrics-server deployment throwing x509 errors in its log, an error when you try to run “kubectl top pods”, no …
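
A quick way to confirm you are hitting this issue (assumes the default K3s placement of metrics-server in kube-system):

kubectl logs -n kube-system deployment/metrics-server | grep -i x509
kubectl top pods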

Kubernetes: evaluating full readiness of deployment, daemonset, or pod

Deployments and DaemonSets typically have more than one replica or desired replica count, and although kubectl default formatting will return columns summarizing how many are desired and how many are currently ready, an automated script needs to parse these values in order to determine full health. Similarly, pod status as well as the readiness …
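
For scripting, jsonpath makes the desired-versus-ready comparison explicit (object and namespace names are placeholders):

# deployment: desired vs ready replicas
kubectl get deployment my-deploy -n my-ns -o jsonpath='{.spec.replicas} {.status.readyReplicas}'

# daemonset: desired vs ready pods
kubectl get daemonset my-ds -n my-ns -o jsonpath='{.status.desiredNumberScheduled} {.status.numberReady}'

# pod: overall Ready condition
kubectl get pod my-pod -n my-ns -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'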

Terraform: terraform_remote_state to pass values to other configurations

It would be uncommon to have one monolithic Terraform configuration for all the infrastructure in your organization. More than likely, there are multiple groups and each has responsibility and ownership of certain components (e.g. networking, storage, authorization, Kubernetes). As an example, let’s say your responsibility is the Kubernetes cluster build. You may need the following …

Kubernetes: creating TLS secrets with kustomize using embedded or external content

There are multiple options for creating a TLS secret using kustomize. One is to embed the certificate content as a base64 string directly in the data, the other is to use an external file. Below is an example kustomization.yaml file that serves as an entry point for both methods.

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: …
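
For the external-file method, the kustomization.yaml would carry a secretGenerator along the lines of the sketch below (file paths and the secret name are placeholders; kustomize expects the tls.crt and tls.key keys for this secret type):

secretGenerator:
- name: my-tls-secret
  type: "kubernetes.io/tls"
  files:
  - tls.crt=certs/server.crt
  - tls.key=certs/server.key
generatorOptions:
  disableNameSuffixHash: true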

GKE: show pod distribution across nodes and zones

Whether you are working on scaling, performance, or high-availability, it can be useful to see exactly which Kubernetes worker node pods are being scheduled onto.

Pods as distributed across worker nodes:

ns=default
kubectl get pods -n $ns -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

Pods as distributed across zones (GKE specific): if you wanted to take it one step further …
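
To correlate those nodes with their zones, the node labels can be listed alongside (GKE sets the standard topology label on each node):

kubectl get nodes -L topology.kubernetes.io/zone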

GKE: Determine Anthos on-prem GKE master node and IP address

If you are using Anthos GKE on-premise and need to determine which node of your Admin Cluster is the master, query for the master role. The label is ‘node-role.kubernetes.io/master’.

$ kubectl get nodes -l node-role.kubernetes.io/master
NAME                     STATUS   ROLES                  AGE   VERSION
gke-admin-master-adfwa   Ready    control-plane,master   7d    v1.24.9-gke.100

# using wide will also show External and Internal IP
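The wide variant mentioned in that comment is simply:

$ kubectl get nodes -l node-role.kubernetes.io/master -o wide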

Kubernetes: list all pods in deployment

Listing all the pods belonging to a deployment can be done by querying its selectors, but using the deployment’s synthesized replicaset identifier allows for easier automation.

# deployment name and namespace
deployment_name=mydeployment
deployment_ns=mynamespace

# get replica set identifier for deployment
dep_rs=$(kubectl describe deployment $deployment_name -n $deployment_ns | grep ^NewReplicaSet | awk '{print $2}')

# get …
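
A sketch of that final lookup, assuming pod names are prefixed with the ReplicaSet identifier captured above:

# get pods belonging to that replica set
kubectl get pods -n $deployment_ns | grep "^${dep_rs}-"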

Kubernetes: restart a simple pod

A pod belonging to a deployment can be manually deleted, scaled down, or restarted to get a fresh pod. However, if all you have is a simple pod definition, these actions are not available. One way of restarting the pod is to output its full yaml definition and use ‘kubectl replace’ with the force option.
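
In command form (the pod name is a placeholder):

kubectl get pod mypod -o yaml | kubectl replace --force -f -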

Kubernetes: patch every array element using kubectl and jq

Below is an example using ‘kubectl patch’ to update the securityContext of a single, specific container named ‘my-init-container1’ of the ‘initContainers’ list.

kubectl patch deployment my-deployment -n default --patch='{
  "spec": {
    "template": {
      "spec": {
        "initContainers": [
          { "name": "my-init-container1", "securityContext": { "runAsUser": 999 } }
        ]
      }
    }
  }
}'

But ‘initContainers’ is an …
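
The jq approach the title refers to would touch every element of the array instead of a single named container. A sketch, assuming the deployment already defines initContainers:

kubectl get deployment my-deployment -n default -o json \
  | jq '.spec.template.spec.initContainers[].securityContext.runAsUser = 999' \
  | kubectl replace -f -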

GCP: fix kubectl auth plugin deprecation warning by installing new auth plugin

Starting with Kubernetes client 1.22, you may start seeing warning messages about your authentication mechanism when running commands. Here is an example when using gcloud for GKE cluster credentials.

WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.

This is because the authentication provider-specific login code will be removed …
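
The remedy is to install the standalone plugin and refresh your kubeconfig entry (cluster name and location are placeholders):

gcloud components install gke-gcloud-auth-plugin
gke-gcloud-auth-plugin --version

# required only on older client versions to opt in to the new plugin
export USE_GKE_GCLOUD_AUTH_PLUGIN=True

# regenerate the kubeconfig entry so it references the plugin
gcloud container clusters get-credentials my-cluster --region us-central1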

GCP: LDAP authentication for Anthos VMware clusters using Anthos Identity Service

Anthos Identity Service allows an organization to tie into their existing Identity Provider to authenticate and authorize users into their Anthos clusters. In this article, I will show how the authentication for an Anthos on VMware cluster can be integrated into an existing Active Directory deployment, and further how a user’s AD group membership can …

Kubernetes: KSA must now create secret/token manually as of Kubernetes 1.24

Before Kubernetes 1.24, the creation of a KSA (Kubernetes Service Account) would also create a non-expiring secret, where the token controller would generate a token that could be used to authenticate into the API server. As a quick example of the legacy behavior on Kubernetes < 1.24, notice how the creation of a service account …
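
On 1.24+ you either request a short-lived token on demand or create the token Secret yourself (service account and namespace names are placeholders):

# short-lived token via the TokenRequest API
kubectl create token my-sa -n my-namespace

# or an explicit long-lived token Secret bound to the service account by annotation
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  namespace: my-namespace
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token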

Kubernetes: Anthos GKE on-prem 1.13 on nested VMware environment

Anthos GKE on-prem is a managed platform that brings GKE clusters to on-premise datacenters. This product offering brings best practice security measures, tested paths for upgrades, basic monitoring, platform logging, and full enterprise support. Setting up a platform this extensive requires many steps as officially documented here. However, if you want to practice in a …

Kubernetes: copying files into and out of containers without ‘kubectl cp’

The ‘kubectl cp’ command is a convenient way to get files into and out of remote containers, however it requires that the ‘tar’ utility be installed inside the container. There are many images that have removed this utility because of the identified security vulnerability, while others have removed it due to the adoption of the …
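
One tar-free alternative is to stream file contents through kubectl exec (assumes the image has a shell and cat; names are placeholders):

# copy a local file into the container
kubectl exec -i mypod -- sh -c 'cat > /tmp/myfile' < ./myfile

# copy a file out of the container
kubectl exec mypod -- cat /tmp/myfile > ./myfile-copy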

Kubernetes: Keycloak IAM deployed into Kubernetes cluster for OAuth2/OIDC

Keycloak is an open-source Identity and Access Management (IAM) solution that can be used to provide authentication and authorization to your enterprise applications. One of the many protocols it supports is OAuth2/OIDC. One of the easiest ways to deploy Keycloak is directly into your Kubernetes cluster, exposed securely with an NGINX Ingress. In this article, …

Kubernetes: accessing the Kubernetes Dashboard with least privilege

The Kubernetes Dashboard provides a convenient web interface for viewing cluster resources. However, if you are logged in using a token tied to the ‘cluster-admin’ role, you will have privileges beyond what is typically necessary. In this article, I will show you how to create a ServiceAccount and ClusterRole with limited privileges that can be used …
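
The article builds a tailored ClusterRole; as a simpler sketch of the same least-privilege idea, a ServiceAccount can instead be bound to the built-in read-only 'view' ClusterRole (names and namespace are assumptions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kubernetes-dashboard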

Prometheus: installing kube-prometheus-stack on a kubeadm cluster

The kube-prometheus-stack bundles the Prometheus Operator, monitors/rules, Grafana dashboards, and AlertManager needed to monitor a Kubernetes cluster. But there are customizations necessary to tailor the Helm installation for a Kubernetes cluster built using kubeadm. In this article, I will detail the necessary modifications to deploy a healthy monitoring stack on a kubeadm cluster.
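
The baseline installation, before the kubeadm-specific overrides, looks like this (namespace, release name, and the custom values filename are placeholders):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace \
  -f custom-values.yaml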

Prometheus: monitoring services using additional scrape config for Prometheus Operator

If you are running the Prometheus Operator (e.g. with kube-prometheus-stack) then you can specify additional scrape config jobs to monitor your custom services. An additional scrape config uses regex evaluation to find matching services en masse, and targets a set of services based on label, annotation, namespace, or name. Note that adding an additional scrape …
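
A minimal sketch of such a job in the Helm values, assuming the chart's prometheus.prometheusSpec.additionalScrapeConfigs key and the common prometheus.io/scrape=true service annotation convention (the job name is illustrative):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: custom-services
      kubernetes_sd_configs:
      - role: service
      relabel_configs:
      # keep only services carrying the scrape annotation
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep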

Prometheus: monitoring a custom Service using ServiceMonitor and PrometheusRule

If you are running the Prometheus Operator as part of your monitoring stack (e.g. kube-prometheus-stack) then you can have your custom Service monitored by defining a ServiceMonitor CRD. The ServiceMonitor is an object that defines the service endpoints that should be scraped by Prometheus and at what interval. In this article, we will deploy a …
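
A minimal ServiceMonitor sketch (labels, port name, and interval are illustrative; kube-prometheus-stack typically also requires a label matching its serviceMonitorSelector, commonly the Helm release label):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service
  namespace: default
  labels:
    release: kube-prometheus-stack   # assumed to match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-service
  endpoints:
  - port: metrics
    interval: 30s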

Prometheus: adding a Grafana dashboard using a ConfigMap

If your Grafana deployment is using a sidecar to watch for new dashboards defined as a ConfigMap, then adding a dashboard is a dynamic operation that can be done without even restarting the pod. If you have deployed the Prometheus/Grafana stack with kube-prometheus-stack, then you can check for the existence of the ‘grafana-sc-dashboard’ sidecar using: …
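
As a sketch (namespace and labels assume the chart defaults; the sidecar's watched label defaults to grafana_dashboard), checking for the sidecar and then adding a dashboard could look like:

# list the container names of the Grafana pod and look for the dashboard sidecar
kubectl get pods -n monitoring -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[*].spec.containers[*].name}'

# create a ConfigMap from a dashboard JSON file and label it for the sidecar
kubectl create configmap my-dashboard -n monitoring --from-file=my-dashboard.json
kubectl label configmap my-dashboard -n monitoring grafana_dashboard=1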

Java: build OCI compatible image for Spring Boot web app using jib

While working on your Spring Boot web application locally, gradle provides the ‘bootRun’ task for a quick development lifecycle and ‘bootJar’ for packaging all the dependencies as a single jar deliverable. But for most applications these days, you will need this packaged into an OCI compatible (i.e. Docker) image for its ultimate deployment to an orchestrator …
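
With the jib Gradle plugin applied, the build-and-push can be a single task invocation (the image name is a placeholder, following the Jib quickstart usage):

# build and push an OCI image without needing a local Docker daemon
./gradlew jib --image=registry.example.com/myapp:1.0.0

# or build into the local Docker daemon using the image configured in build.gradle
./gradlew jibDockerBuild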

Prometheus: external template for AlertManager html email with kube-prometheus-stack

The kube-prometheus-stack bundles AlertManager for taking action on Prometheus alerts. And if you are customizing the Helm custom values file to configure email alerting, there are multiple options available. The simplest is to allow the system to fall back to using the default subject and html templates. But if you need to tailor the email content …

Prometheus: exposing Prometheus/Grafana as Ingress for kube-prometheus-stack

The kube-prometheus-stack bundles Prometheus, Grafana, and AlertManager for monitoring a Kubernetes cluster. By default, the Ingress of these services is disabled. In this article I will show you how to expose these services with NGINX Ingress either via subdomain (e.g. prometheus.my.domain) or web context (e.g. my.domain/prometheus). You would not want to expose these monitoring applications …
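
A sketch of the subdomain variant in the Helm values, assuming the chart's prometheus.ingress and grafana.ingress sections and an nginx ingress class (hostnames are placeholders):

prometheus:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - prometheus.my.domain
grafana:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - grafana.my.domain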