Containers

Github: security scanning built into GitHub Actions image build

Github Actions provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there are custom actions available for running the Trivy container security scanner. In this article, I will show you how to modify your GitHub Action to run the Trivy security scanner against your image…
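As a rough sketch of what that looks like (the image name/tag are placeholders, and you may want to pin the action to a released version instead of master), a scan step using the aquasecurity/trivy-action can follow the image build step:

  - name: Run Trivy vulnerability scanner
    uses: aquasecurity/trivy-action@master
    with:
      image-ref: myorg/myapp:${{ github.sha }}
      format: table
      exit-code: '1'
      severity: 'CRITICAL,HIGH'

With exit-code set to '1', the workflow fails when vulnerabilities of the listed severities are found, gating the push to the registry.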

GitLab: security scanning built into GitLab Pipelines image build

GitLab Pipelines provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there is a convenient method for doing container security scanning as part of the build process. As described in the official documentation, add the following include to your .gitlab-ci.yml pipeline definition…
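For reference, the documented include is a single template line in .gitlab-ci.yml (template path per the official GitLab docs; newer GitLab versions may use a slightly different path):

  include:
    - template: Security/Container-Scanning.gitlab-ci.yml

This adds a container_scanning job that scans the image produced earlier in the pipeline.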

minikube: installing minikube on Mac with secure TLS ingress

minikube makes it easy to spin up a local Kubernetes cluster on macOS, and adding an Ingress is convenient with its built-in Addons. In this article, I want to take it one step further and show how to expose the Ingress via TLS (secure https) using a custom key/certificate chain.
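At a high level (secret and file names here are illustrative, and the article's exact approach may differ), the addon is enabled and the custom chain is loaded as a TLS secret that the Ingress can reference under spec.tls:

  minikube addons enable ingress
  kubectl create secret tls my-tls-secret --key tls.key --cert tls.crt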

Vault: synchronizing secrets from Vault to Kubernetes using Vault Secrets Operator

The Vault Secrets Operator is a Vault integration that runs inside a Kubernetes cluster and synchronizes Vault-level secrets to Kubernetes-level secrets. This secret synchronization happens transparently to the running workloads, without any need to retrofit existing images or manifests. In this article, I will show how to install the Vault Secrets Operator (VSO) and configure the…
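As an illustrative sketch of the synchronization piece (names, mount, and path are placeholders), a VaultStaticSecret custom resource tells VSO which Vault secret to mirror into a Kubernetes Secret:

  apiVersion: secrets.hashicorp.com/v1beta1
  kind: VaultStaticSecret
  metadata:
    name: webapp-secret
  spec:
    vaultAuthRef: vault-auth
    mount: kv
    type: kv-v2
    path: webapp/config
    refreshAfter: 60s
    destination:
      name: webapp-secret
      create: true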

Vault: JWT authentication mode with multiple roles to isolate secrets

In this article, I will detail how to use Vault JWT auth mode to isolate the secrets of two different deployments in the same Kubernetes cluster.  This will be done by using two different Kubernetes Service Accounts, each of which generates a unique JWT tied to a different Vault role…
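A hedged sketch of the Vault side (service account, namespace, policy, and audience values are all placeholders):

  vault auth enable jwt
  vault write auth/jwt/role/app1 \
      role_type=jwt \
      user_claim=sub \
      bound_subject="system:serviceaccount:team1:app1-sa" \
      bound_audiences="https://kubernetes.default.svc.cluster.local" \
      policies=app1-policy \
      ttl=1h

A second role (e.g. app2) bound to a different service account subject and policy keeps each deployment's secrets isolated.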

Vault: NodeJS Express web app using node-vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a NodeJS Express web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using the node-vault module…

Vault: Spring Boot web app using Spring Cloud Vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a Java Spring Boot web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using Spring Cloud Vault…
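For orientation, the Spring Cloud Vault configuration typically lives in the application's YAML properties; the URI, role, and backend below are placeholders:

  spring:
    cloud:
      vault:
        uri: https://vault.example.com:8200
        authentication: KUBERNETES
        kubernetes:
          role: my-app-role
        kv:
          enabled: true
          backend: secret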

Vault: HashiCorp Vault deployed into Kubernetes cluster for secret management

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. It has tight integrations with Kubernetes that allow containers to fetch secrets without hardcoding them into environment variables, files, or external services. The official docs already provide usage scenarios…
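Installation itself is typically a short Helm exercise using HashiCorp's official chart; chart values then control standalone versus HA mode, storage, and UI exposure:

  helm repo add hashicorp https://helm.releases.hashicorp.com
  helm repo update
  helm install vault hashicorp/vault --namespace vault --create-namespace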

GitLab: automated build and publish of multi-platform container image with GitLab pipeline

GitLab CI/CD pipelines can be used to automatically build and push Docker images to the GitLab Container Registry. Beyond building a simple image, in this article I will show how to define a workflow that builds and pushes a multi-platform image (amd64, arm64, arm32) with a manifest index to the GitLab Container Registry…

Github: automated build and publish of multi-platform container image with Github Actions

Github Actions provide the ability to define a build workflow based on Github repository events.  The workflow steps are defined as yaml and can be triggered by various events, including a code push, branch creation, or tagging in the repository. In this article, I will show how to define workflow steps that build and push a multi-platform container image…
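A hedged sketch of the key steps (registry, tags, and action versions are illustrative) using the docker setup-qemu, setup-buildx, login, and build-push actions:

  - uses: docker/setup-qemu-action@v3
  - uses: docker/setup-buildx-action@v3
  - uses: docker/login-action@v3
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
  - uses: docker/build-push-action@v5
    with:
      platforms: linux/amd64,linux/arm64,linux/arm/v7
      push: true
      tags: ghcr.io/${{ github.repository }}:latest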

Docker: building multi-platform images that use fat manifest list/index

Docker can build multi-platform images that use a manifest index (fat manifest list) by using the Docker buildx command with a backing containerd runtime and QEMU for cross-platform emulation. Using a manifest index for multi-platform images simplifies application-level orchestration by using the same name and version for all architectures.
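For example (registry and tag are placeholders), a single buildx invocation produces and pushes one name:tag covering all three platforms:

  docker buildx create --name multibuilder --use
  docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t myregistry/myapp:1.0.0 \
    --push .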

Docker: installing Docker CE on Ubuntu

Docker is a container platform that streamlines software delivery and provides isolation, scalability, and efficiency with less overhead than a full virtual machine. These instructions are taken from the official Docker for Ubuntu page, but I fine-tuned them per Ubuntu 22+ standards.
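The core commands, per the official Docker instructions for Ubuntu with a signed-by keyring:

  sudo apt-get update && sudo apt-get install -y ca-certificates curl gnupg
  sudo install -m 0755 -d /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  sudo chmod a+r /etc/apt/keyrings/docker.gpg
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io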

GCP: Cloud Run with build trigger coming from remote GitHub repository

GCP build triggers can easily handle Continuous Deployment (CD) when the source code is homed in a Google Cloud Source repository.  But even if the system of record for your source is a remote GitHub repository, these same types of push and tag events can be consumed if you configure a connection and repository link.
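Once the connection and repository link exist, a push trigger can be created with gcloud; the names, owner, and branch pattern below are placeholders, and the exact flag set varies between 1st- and 2nd-generation GitHub connections:

  gcloud builds triggers create github \
    --name=my-cloudrun-trigger \
    --repo-owner=myorg \
    --repo-name=myrepo \
    --branch-pattern='^main$' \
    --build-config=cloudbuild.yaml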

Github: automated build and publish of containerized GoLang app with Github Actions

Github Actions provide the ability to define a build workflow based on Github repository events.  The workflow steps are defined as yaml and can be triggered by various events, including a code push, branch creation, or tagging in the repository. In this article I will detail the steps of creating a statically-linked GoLang binary…
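The static linking itself comes down to disabling CGO at build time (the output name is a placeholder), which yields a binary suitable for minimal base images:

  CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o myapp .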

Github: automated build and publish of containerized Spring Boot app using GitHub Actions

Github Actions provide the ability to define a build workflow directly in Github.  The workflow steps are defined as yaml and can be triggered by various events, including a code push, branch creation, or tagging in the repository. In this article I will detail the steps of creating a simple Spring Boot web application…

Github: locally invoked release process for a Gradle built Java Spring Boot project

The GitHub “Release” page for a repository can provide your consumers a convenient way to download a binary version of your software as well as track the latest changes and enhancements. In this article, I will show how to invoke a local release process for a Java Spring Boot jar built with Gradle…
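With the GitHub CLI available, one hedged sketch of such a local release (version and jar path are placeholders) is to build the jar and attach it to a new release:

  ./gradlew bootJar
  gh release create v1.0.0 build/libs/myapp-1.0.0.jar --title "v1.0.0" --notes "release notes here"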

Java: build OCI compatible image for Spring Boot web app using jib

While working on your Spring Boot web application locally, Gradle provides the ‘bootRun’ task for a quick development lifecycle and the ‘bootJar’ task for packaging all the dependencies as a single jar deliverable. But for most applications these days, you will need this packaged into an OCI-compatible (i.e. Docker) image for its ultimate deployment to an orchestrator…
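With the com.google.cloud.tools.jib Gradle plugin applied, the image can be built and pushed without a local Docker daemon; the target image below is a placeholder:

  ./gradlew jib -Djib.to.image=myregistry/myapp:1.0.0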

Java: Creating Docker image for Spring Boot web app using gradle

While working on your Spring Boot web application locally, Gradle provides the ‘bootRun’ task for a quick development lifecycle and the ‘bootJar’ task for packaging all the dependencies as a single jar deliverable. But for most applications these days, you will need this packaged into an OCI-compatible (i.e. Docker) image for its ultimate deployment to an orchestrator…

Python: New Relic Agent for Gunicorn app deployed on Kubernetes

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production. If you are running these types of workloads on a production Kubernetes cluster, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging…

Python: New Relic instrumentation for Flask app deployed with Gunicorn

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production.  If you are running these types of workloads in production, you should consider an observability platform such as New Relic to ensure availability, service levels, and visibility into transactions and logging. In a previous article, we created a Docker image of a Flask application served by Gunicorn…
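The usual instrumentation pattern (config file path and module:app names are illustrative) is to wrap the gunicorn invocation with the newrelic-admin launcher shipped in the newrelic pip package:

  pip install newrelic
  NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program \
    gunicorn --bind 0.0.0.0:8000 app:app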

Python: Building an image for a Flask app served from Gunicorn

Gunicorn is a WSGI HTTP server commonly used to run Flask applications in production.  Running Flask applications directly is great for development and testing of the basic request/response flow, but you need gunicorn to handle production-level loads, concurrency, logging, and timeouts. In this article, I will show you how to build a Docker image for a Flask app served from Gunicorn.
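A minimal sketch of such an image, assuming app.py exposes a Flask object named app and requirements.txt pins flask and gunicorn:

  FROM python:3.11-slim
  WORKDIR /app
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt
  COPY . .
  EXPOSE 8000
  CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "2", "app:app"]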

GCP: Enable Policy Controller on a GKE cluster

Anthos Policy Controller enables enforcement of compliance, security, and organizational policies on GKE clusters. These might be best-practice policies coming from internal architectural standards, technical policies used to define/constrain resources, or audit requirements stemming from legal regulation. Anthos Policy Controller is built upon the open-source Open Policy Agent (OPA) Gatekeeper…

Kubernetes: volumeMount, emptyDir, and env equivalents during local Docker development

Kubernetes has a rich way of expressing volumes/volumeMounts for mounting files, emptyDir for ephemeral directories, and env/envFrom for adding environment variables to your container definition running on a Kubernetes cluster. However, if you are actively iterating on the development of an image, it may slow you down to require a deployment to a remote cluster…
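As a rough mapping (file, variable, and image names are placeholders): -v covers a volumeMount of a single file, --mount type=tmpfs approximates a memory-backed emptyDir, and -e/--env-file stand in for env/envFrom:

  docker run \
    -v "$(pwd)/settings.conf:/etc/myapp/settings.conf" \
    --mount type=tmpfs,destination=/scratch \
    -e MY_VAR=myvalue \
    --env-file myapp.env \
    myorg/myapp:latest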

GCP: serving a maintenance page using an HTTPS LB and container native routing

No matter how highly available your services, there may still be significant backend events that require planned maintenance.  During this downtime, you should still reply to end users and service integrations with a proper response. In this article, I will show you how to configure your GCP HTTPS Loadbalancer so that a single maintenance service…

Kubernetes: emptying the finalizers for a namespace that will not delete

If your intent is to delete all the objects in a namespace, but the command is not completing, emptying the namespace finalizer will often allow the deletion to finish. For example, you may have tried deleting the “my-namespace” namespace like below and it will not complete:

  kubectl delete ns my-namespace --force --grace-period=0
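The commonly used recipe then patches the finalizer list to empty and submits it to the namespace finalize endpoint:

  kubectl get namespace my-namespace -o json \
    | jq '.spec.finalizers=[]' \
    | kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f -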

GCP: HTTP to HTTPS redirection using HTTPS LB Ingress

It is not necessary to create an independent GCP HTTPS LB or other improvisation to redirect insecure HTTP traffic to your HTTPS load balancer.  The existing public Ingress can reference a FrontendConfig object that specifies redirection to HTTPS. Below is a FrontendConfig definition that can redirect the insecure traffic.
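Reconstructed from the GKE documentation, the definition comes down to a redirectToHttps stanza; the Ingress then opts in via the networking.gke.io/v1beta1.FrontendConfig annotation (the object name is a placeholder):

  apiVersion: networking.gke.io/v1beta1
  kind: FrontendConfig
  metadata:
    name: my-frontend-config
  spec:
    redirectToHttps:
      enabled: true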

GCP: Private GKE cluster in Autopilot mode using Terraform

GKE Autopilot reduces the operational costs of managing GKE clusters by freeing you from node level maintenance, instead focusing just on pod workloads.  Costs are accrued based on pod resource consumption and not on node resource sizes or node count, which are managed by Google. Since you no longer own the node level, there are…
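A minimal Terraform sketch of such a cluster; the name, location, CIDR, and the referenced network/subnetwork resources are all assumptions, not the article's exact configuration:

  resource "google_container_cluster" "autopilot" {
    name             = "my-autopilot-cluster"
    location         = "us-central1"
    enable_autopilot = true

    network    = google_compute_network.vpc.id
    subnetwork = google_compute_subnetwork.subnet.id

    private_cluster_config {
      enable_private_nodes    = true
      enable_private_endpoint = false
      master_ipv4_cidr_block  = "172.16.0.0/28"
    }
  }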

GCP: Private GKE Cluster with Anthos Service Mesh exposing services

As opposed to public GKE clusters which have their IP addresses exposed, private GKE clusters use private internal IP addresses.  This provides an enhanced security stance, but also means we need a solution such as Anthos Service Mesh to explicitly expose our services. In our previous article, we built a private GKE cluster using Terraform.

Kubernetes: ingress-nginx-controller-admission error, x509 certificate signed by unknown authority

If you delete the entire nginx namespace and reinstall again via helm chart, your nginx admission controller may throw an “x509 certificate signed by unknown authority” message when you attempt to create an nginx ingress. This happens regardless of whether the ingress uses http only or secure https…
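One commonly cited remedy (not necessarily the article's exact fix) is to delete the stale cluster-scoped webhook configuration left over from the earlier install, so the reinstalled chart recreates it with a matching CA:

  kubectl delete validatingwebhookconfigurations ingress-nginx-admission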

Kubernetes: using kubectl to wait for condition of pods, deployments, services

Instead of deploying a pod or service and periodically checking its status for readiness, or having your automation scripts wait for a certain number of seconds before moving to the next operation, it is much cleaner to use ‘kubectl wait’ to sense completion. Here is how you would wait for READY status…
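Two representative invocations (object names and timeouts are placeholders):

  kubectl wait --for=condition=Ready pod/mypod --timeout=60s
  kubectl wait --for=condition=Available deployment/mydeployment --timeout=120s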

Helm: Installing Helm on Ubuntu

Update Aug 2023: using newer ‘signed-by’ attribute for apt signing keys. Installing Helm using apt is a straightforward procedure and documented on the official site.  Coming straight from the official helm documentation, here are the commands for Ubuntu 22.
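Reconstructed from the official Helm documentation, the full command sequence is:

  curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
  sudo chmod 644 /usr/share/keyrings/helm.gpg
  sudo apt-get install apt-transport-https --yes
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
  sudo apt-get update
  sudo apt-get install helm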

Kubernetes: running a mail container for testing email during development

If you are in the development lifecycle and need to quickly test email functionality, you can deploy the codecentric/mailhog image directly within Kubernetes.  It will receive all email regardless of address, and from its web interface show you all the email that has been received. In this article, I will show you how to deploy it.
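A quick way to stand it up for a smoke test (object names are illustrative; ports 1025/8025 are MailHog's SMTP and web UI defaults):

  kubectl create deployment mailhog --image=codecentric/mailhog
  kubectl expose deployment mailhog --port=1025 --name=mailhog-smtp
  kubectl expose deployment mailhog --port=8025 --name=mailhog-web
  kubectl port-forward deployment/mailhog 8025:8025

The web UI is then reachable at http://localhost:8025.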

Kubernetes: NFS mount using dynamic volume and Storage Class

If you have an external NFS export and want to share that with a pod/deployment, you can leverage the nfs-subdir-external-provisioner to create a StorageClass that can dynamically create the persistent volume. In contrast to manually creating the persistent volume and persistent volume claim, this dynamic method cedes the lifecycle of the persistent volume over to…
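Deployment of the provisioner is typically done with its Helm chart, pointing it at the export (server IP and path are placeholders):

  helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.10 \
    --set nfs.path=/exported/path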