Kubernetes

Kubernetes: KSA must now create secret/token manually as of Kubernetes 1.24

Before Kubernetes 1.24, the creation of a KSA (Kubernetes Service Account) would also create a non-expiring secret, where the token controller would generate a token that could be used to authenticate to the API server. As a quick example of the legacy behavior on Kubernetes < 1.24, notice how the creation of a service account …
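
On 1.24 and newer, a token has to be requested explicitly, or a token Secret created by hand. A minimal sketch, assuming a hypothetical service account named ‘my-sa’:

```bash
# create the service account (no token secret is auto-generated on 1.24+)
kubectl create serviceaccount my-sa

# option 1: request a short-lived, bound token
kubectl create token my-sa --duration=1h

# option 2: explicitly create a long-lived token Secret tied to the service account
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token
EOF
```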

Kubernetes: Keycloak IAM deployed into Kubernetes cluster for OAuth2/OIDC

Keycloak is an open-source Identity and Access Management (IAM) solution that can be used to provide authentication and authorization to your enterprise applications. One of the many protocols it supports is OAuth2/OIDC. One of the easiest ways to deploy Keycloak is directly into your Kubernetes cluster, exposed securely with an NGINX Ingress. In this article, …
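
As a rough sketch only (not the article's exact manifests), a dev-mode Keycloak Deployment and Service that an NGINX Ingress could then route to might look like this; the image tag, admin credentials, and names are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:latest
        args: ["start-dev"]   # dev mode only; production would use 'start' with TLS/proxy settings
        env:
        - name: KEYCLOAK_ADMIN
          value: admin
        - name: KEYCLOAK_ADMIN_PASSWORD
          value: changeme
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  selector:
    app: keycloak
  ports:
  - port: 8080
    targetPort: 8080
```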

Kubernetes: targeting the addition of array items to a multi-document yaml manifest

If you have a Kubernetes yaml manifest that contains multiple documents, targeting a single document for modification while still outputting the other documents untouched can be a challenge. As an example, consider the simple case below where you have a single yaml file that contains a Namespace, Deployment, and DaemonSet. And we want to add …
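
With mikefarah's yq v4, one way to do this is to wrap a select() inside the update expression, so documents that do not match are emitted unchanged. A sketch, assuming we want to append an illustrative env var to the Deployment's first container:

```bash
# only the Deployment document is modified; the Namespace and DaemonSet pass through untouched
yq e '(select(.kind == "Deployment") | .spec.template.spec.containers[0].env) += [{"name": "DEBUG", "value": "true"}]' manifest.yaml
```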

Python: New Relic Agent for Gunicorn app deployed on Kubernetes

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. If you are running these types of workloads on a production Kubernetes cluster, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging. In a series of previous articles, we …
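
The usual pattern is to install the New Relic Python agent into the image and wrap the Gunicorn entrypoint with newrelic-admin. A hedged sketch, where the license key and app name would normally be injected from a Kubernetes Secret/ConfigMap rather than exported inline:

```bash
pip install newrelic

# agent configuration via environment variables (placeholders here)
export NEW_RELIC_LICENSE_KEY=<your-license-key>
export NEW_RELIC_APP_NAME=my-flask-app

# wrap the normal Gunicorn command so the agent instruments the workers
newrelic-admin run-program gunicorn --bind 0.0.0.0:8000 app:app
```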

Kubernetes: NFS mount using dynamic volume and Storage Class

If you have an external NFS export and want to share that with a pod/deployment, you can leverage the nfs-subdir-external-provisioner to create a StorageClass that can dynamically create the persistent volume. In contrast to manually creating the persistent volume and persistent volume claim, this dynamic method cedes the lifecycle of the persistent volume over to Kubernetes …
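
A sketch of the dynamic approach, assuming an NFS server at 192.168.1.10 exporting /exported/path (both placeholders); the provisioner's Helm chart creates a StorageClass (named 'nfs-client' by default) that PVCs can reference:

```bash
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.1.10 \
  --set nfs.path=/exported/path

# a PVC that lets the provisioner create the PersistentVolume on demand
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
EOF
```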

Kubernetes: LetsEncrypt certificates using HTTP and DNS solvers on DigitalOcean

Managing certificates is one of the most mundane, yet critical chores in the maintenance of environments. However, this manual maintenance can be off-loaded to cert-manager on Kubernetes. In this article, we will use cert-manager to generate TLS certs for a public NGINX ingress using Let’s Encrypt. The primary ingress will have two different hosts using …
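
As a sketch of what cert-manager needs, a single ClusterIssuer can carry both an HTTP-01 solver (fulfilled through the NGINX ingress) and a DNS-01 solver backed by the DigitalOcean API; the email, secret names, and DNS zone below are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    # HTTP-01 challenges answered by the public NGINX ingress
    - http01:
        ingress:
          class: nginx
    # DNS-01 challenges answered via the DigitalOcean DNS API token
    - dns01:
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: access-token
      selector:
        dnsZones:
        - example.com
```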

Terraform: creating a Kubernetes cluster on DigitalOcean with public NGINX ingress

Updated Aug 2023: tested with Kubernetes 1.25 and ingress-nginx 1.8.1. Creating a Kubernetes cluster on DigitalOcean can be done manually using its web Control Panel, but for automation purposes it is better to use Terraform. In this article, we will use Terraform to create a Kubernetes cluster on DigitalOcean infrastructure. We will then use helm …
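
A minimal Terraform sketch of the cluster itself (region, version slug, and droplet size are illustrative values; the helm/ingress pieces would follow separately):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "do_token" {
  description = "DigitalOcean API token"
  type        = string
  sensitive   = true
}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_kubernetes_cluster" "main" {
  name    = "my-doks-cluster"
  region  = "nyc1"
  version = "1.25.12-do.0" # must be a valid DOKS version slug

  node_pool {
    name       = "default-pool"
    size       = "s-2vcpu-4gb"
    node_count = 2
  }
}
```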

Kubernetes: microk8s with multiple metalLB endpoints and nginx ingress controllers

Out-of-the-box, microk8s has add-ons that make it easy to enable MetalLB as a network load balancer as well as an NGINX ingress controller. But a single ingress controller is often not sufficient. For example, the primary ingress may be serving up all public traffic to your customers. But a secondary ingress might be necessary to …
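
As a sketch of the moving parts (address pool, names, and chart values are assumptions, not the article's exact settings): the built-in add-ons provide MetalLB and the first controller, and a second ingress-nginx install via helm, with its own ingress class, can be pinned to a different MetalLB address:

```bash
# enable MetalLB with a LAN address pool, plus the default ingress and helm3 add-ons
microk8s enable metallb:192.168.1.200-192.168.1.220
microk8s enable ingress
microk8s enable helm3

# a secondary controller with its own class, pinned to a specific MetalLB IP
microk8s helm3 repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
microk8s helm3 install ingress-internal ingress-nginx/ingress-nginx \
  --namespace ingress-internal --create-namespace \
  --set controller.ingressClassResource.name=nginx-internal \
  --set controller.ingressClass=nginx-internal \
  --set controller.service.type=LoadBalancer \
  --set controller.service.loadBalancerIP=192.168.1.210
```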

Kubernetes: microk8s cluster on Ubuntu using terraform and libvirt

microk8s is a lightweight Kubernetes deployment by Canonical that is enterprise-grade, yet also compact enough to run on development boxes and edge devices. In this article, I will show you how to deploy a three-node microk8s cluster on Ubuntu nodes that are created using Terraform and a local KVM libvirt provider. This article focuses on …
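
As a rough sketch of the libvirt side (not the article's exact configuration), one node created from an Ubuntu cloud image using the community dmacvicar/libvirt provider might look like this; the pool, image URL, and sizing are illustrative:

```hcl
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# base disk for the guest, pulled from an Ubuntu cloud image
resource "libvirt_volume" "ubuntu" {
  name   = "microk8s-1.qcow2"
  pool   = "default"
  source = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

# the KVM guest itself
resource "libvirt_domain" "microk8s" {
  name   = "microk8s-1"
  memory = 4096
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.ubuntu.id
  }

  network_interface {
    network_name = "default"
  }
}
```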

Kubernetes: Using Downward API metadata from a Python application

In a previous post, I described the Kubernetes Downward API and how it allows us to inject pod/container metadata into our runtime container. In this article, I’ll show how you can read the environment variables and mounted files from inside a containerized Python-based application.
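
A minimal sketch of the Python side, assuming the pod spec injected MY_POD_NAME/MY_POD_NAMESPACE as env vars via fieldRef and mounted a downwardAPI volume at /etc/podinfo (the variable names and mount path are the usual examples, not mandated by Kubernetes):

```python
import os

# environment variables injected with valueFrom.fieldRef in the pod spec
pod_name = os.environ.get("MY_POD_NAME", "unknown")
namespace = os.environ.get("MY_POD_NAMESPACE", "unknown")
print(f"pod={pod_name} namespace={namespace}")

# the downwardAPI volume exposes pod labels as key="value" lines in a mounted file
with open("/etc/podinfo/labels") as f:
    for line in f:
        key, _, value = line.strip().partition("=")
        value = value.strip('"')
        print(f"label {key} -> {value}")
```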

Kubernetes: detecting the installed version of nginx ingress

If you need to determine the version of the nginx ingress controller deployed, then you can invoke the ingress controller binary with the ‘--version’ flag. But this binary is located in the ingress-nginx-controller pod, so do a ‘kubectl exec’ like below.

# show all running nginx ingress pods
kubectl get pods -n $ingress_ns -l app.kubernetes.io/name=ingress-nginx
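
Then exec into one of those pods and run the binary itself; a sketch assuming the standard deployment name ‘ingress-nginx-controller’:

```bash
# ask the controller binary inside the pod for its version banner
kubectl exec -n $ingress_ns deploy/ingress-nginx-controller -- /nginx-ingress-controller --version
```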

Kubernetes: testing pod communication directly from istio sidecar proxy

Once you introduce an istio sidecar proxy into your deployment, it becomes another point at which you might need to troubleshoot network connectivity to the primary container. Assuming you have deployed a pod with an app label “helloworld” in the default namespace listening on port 5000, you can use a command like the following to …
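
A sketch under those assumptions (default namespace, app=helloworld label, port 5000), also assuming the istio-proxy image ships with curl:

```bash
# find the helloworld pod, then curl the app container over localhost from inside its sidecar
POD=$(kubectl get pod -n default -l app=helloworld -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n default "$POD" -c istio-proxy -- curl -s http://localhost:5000/
```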

Kubernetes: istio Gateway in a different namespace than VirtualService

If your istio ingress Gateway is in a different namespace than your VirtualService, then you need to make sure you prefix the gateway reference with that namespace. For example, your istio ingress Gateway may be in the ‘default’ namespace, yet your Deployment, Service, and VirtualService are in the namespace ‘helloworld’.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata: …
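
A hedged sketch of what that VirtualService could look like, the key detail being the ‘default/’ namespace prefix on the gateway reference (names, host, and port are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
  namespace: helloworld
spec:
  hosts:
  - "helloworld.example.com"
  gateways:
  - default/helloworld-gateway   # <namespace>/<gateway-name> because the Gateway lives elsewhere
  http:
  - route:
    - destination:
        host: helloworld
        port:
          number: 5000
```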

Kubernetes: running Minikube locally on Ubuntu using VirtualBox

Updated article to latest Minikube, Feb 2019. Minikube is a tool that runs a Kubernetes stack inside a single VM being run by a local virtualization engine such as VirtualBox. This makes it ideal for local development and experimentation. In this article we’ll be going through installation and validation of a Minikube installation on Ubuntu …
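
As a quick sketch of the workflow once Minikube, kubectl, and VirtualBox are installed (the exact driver flag has varied across Minikube releases):

```bash
# start a single-node cluster inside a VirtualBox VM, then verify it
minikube start --vm-driver=virtualbox
minikube status
kubectl get nodes
```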