Bash: falling back to file autocompletion if errors introduced by program autocompletion

At the Bash command line interface, there is the concept of programmable completion and regular file/directory completion. This means that when you press <TAB>, the alternatives can be provided by a custom program or the filesystem hierarchy. There is always the chance that a program may introduce undesirable behavior to your auto-completion…
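As a minimal sketch of the fallback, assuming the misbehaving completion belongs to a hypothetical command named 'mycmd':

    # show the completion spec currently registered for the command
    complete -p mycmd
    # remove it; with no compspec registered, Bash reverts to default filename completion
    complete -r mycmd
    # alternatively, force filename completion for just the current word with Alt+/ (readline complete-filename)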

GitHub: security scanning built into GitHub Actions image build

GitHub Actions provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there are custom actions available for running the Trivy container security scanner. In this article, I will show you how to modify your GitHub Action to run the Trivy security scanner against your image…
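As a sketch, a scan step using the aquasecurity/trivy-action project could be appended to an existing job (the workflow path, image name, and severity list are assumptions):

    # append a Trivy scan step to an existing workflow job (YAML via heredoc)
    cat >> .github/workflows/build.yml <<'EOF'
          - name: Run Trivy vulnerability scanner
            uses: aquasecurity/trivy-action@master
            with:
              image-ref: 'myregistry/myapp:latest'
              format: 'table'
              exit-code: '1'
              severity: 'CRITICAL,HIGH'
    EOF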

GitLab: security scanning built into GitLab Pipelines image build

GitLab Pipelines provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there is a convenient method for doing container security scanning as part of the build process. Include Container Scanning: as described in the official documentation, add the following include to your .gitlab-ci.yml pipeline definition…
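The official template include is a one-liner; as a sketch:

    # add the Container Scanning template to .gitlab-ci.yml (YAML via heredoc)
    cat >> .gitlab-ci.yml <<'EOF'
    include:
      - template: Security/Container-Scanning.gitlab-ci.yml
    EOF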

GCP: publishing and reading from Google PubSub Topic using Python client libraries

Google Pub/Sub is a managed messaging platform providing a scalable, asynchronous, loosely-coupled solution for communication between application entities. It centers around the concept of a Topic (queue).  A Publisher can put messages on the Topic, and a Subscriber can read messages from the Subscription on a Topic. In this article, I will first use…
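The article uses the Python client libraries, but the same publish/pull flow can be sanity-checked first with the gcloud CLI (topic and subscription names below are placeholders):

    # create a topic and a subscription attached to it
    gcloud pubsub topics create mytopic
    gcloud pubsub subscriptions create mysub --topic=mytopic
    # publish a message, then pull it from the subscription
    gcloud pubsub topics publish mytopic --message="hello"
    gcloud pubsub subscriptions pull mysub --auto-ack --limit=1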

GCP: Installing KEDA on a GKE cluster with workload identity and testing Scalers

KEDA is an open-source event-driven autoscaler that greatly enhances the abilities of the standard HorizontalPodAutoscaler.  It can scale based on internal metrics as well as external Scaler sources. In this article, I will illustrate how to install KEDA on a GKE cluster that has Workload Identity enabled, and then how to configure KEDA scaling events…
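A sketch of the basic Helm install (the Workload Identity annotation on the KEDA service account is environment-specific and not shown):

    helm repo add kedacore https://kedacore.github.io/charts
    helm repo update
    helm install keda kedacore/keda --namespace keda --create-namespace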

GCP: historical log of GKE cluster and nodepool upgrades and scaling

Although the simple 'gcloud container operations list' command is the easiest way to find recent upgrade events on your GKE cluster or nodepool, it returns only the recent events and does not provide a historical record. If you need to look at historical events, you can use the Logs Explorer web UI or use the 'gcloud'…
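A sketch of both approaches; the Cloud Logging filter is an assumption and may need adjusting to the exact audit method names in your project:

    # recent events only
    gcloud container operations list --filter="operationType=UPGRADE_NODES"
    # deeper history via Cloud Logging
    gcloud logging read 'resource.type="gke_cluster" AND protoPayload.methodName:"UpdateNodePool"' --limit=20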

Bash: calculating number of days till certificate expiration using openssl

The openssl utility can be used to show the details of a certificate, including its ‘Not After’ expiration date in string format.  This can be transformed into “how many days till expiration” with a bit of Bash date math. Create test certificate and key: using a line provided by Diego Woitasen for non-interactive self-signed certification…
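A minimal sketch of the full round trip (the self-signed one-liner here is a generic variant, not necessarily the exact line from the article):

    # create a throwaway self-signed certificate valid for 30 days
    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 30 -subj "/CN=test"
    # pull out the 'Not After' date and convert it to days remaining (GNU date)
    enddate=$(openssl x509 -in cert.pem -noout -enddate | cut -d= -f2)
    echo "$(( ($(date -d "$enddate" +%s) - $(date +%s)) / 86400 )) days until expiration"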

GitLab: URL shortcut to override pipeline variable values

GitLab pipelines are a convenient way to expose deployment/delivery tasks.  But with their rudimentary web UI for variable input, it can be challenging for users to populate the required list of variables. One way of making it more convenient for end-users is to provide them a URL pre-populated with the specific branch and pipeline variable…
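As a sketch, the Run Pipeline page accepts the branch and variables as query-string parameters (project path, branch, and variable name are placeholders):

    # pre-populates the Run Pipeline form with a branch and one variable value
    https://gitlab.com/mygroup/myproject/-/pipelines/new?ref=main&var[DEPLOY_ENV]=staging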

OpenTofu: installing OpenTofu on Debian/Ubuntu

Terraform has now been forked into the open-source OpenTofu project.  The ‘tofu’ binary is a drop-in replacement for terraform, and this article will show you how to install it on Debian/Ubuntu. After installation, we will then use the Debian/Ubuntu Alternatives concept to supersede existing calls to ‘terraform’ to instead invoke ‘tofu’. Setup OpenTofu apt repository…
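A sketch of the install plus the Alternatives wiring (repository setup follows the official docs; the binary path and priority value are assumptions):

    # after configuring the OpenTofu apt repository per the official docs
    sudo apt-get update && sudo apt-get install -y tofu
    # route existing 'terraform' invocations to tofu via the Alternatives system
    sudo update-alternatives --install /usr/local/bin/terraform terraform /usr/bin/tofu 10
    terraform version   # now reports OpenTofu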

Terraform: external yaml file as a contribution model for outside teams

If you are using Terraform as a way to provide infrastructure/services to outside teams, your Terraform definitions and variables may initially be owned by your team alone, with all “tf apply” operations done by trusted internal group members at the CLI. But once there is a certain amount of maturity in the solution, the Terraform…
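A sketch of the underlying mechanism: outside teams contribute to a plain YAML file (a hypothetical teams.yaml) that Terraform reads with yamldecode():

    # contributions arrive as merge requests against a simple YAML file
    cat > teams.yaml <<'EOF'
    teams:
      - name: alpha
        quota: 10
    EOF
    # consumed from your .tf definitions (HCL shown as a comment):
    #   locals { teams = yamldecode(file("${path.module}/teams.yaml")) }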

yq: updating deeply nested elements

Mike Farah’s yq yaml processor has a rich set of operators and functions for advanced usage.  In this article, I will illustrate how to update deeply nested elements in yaml.  This can be done for both known paths as well as arbitrarily deep paths. Sample yaml: we will use the following yaml files…
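A minimal sketch against a hypothetical deploy.yaml; the recursive form follows the select/update pattern from the yq documentation:

    # known path: update a deeply nested value in place
    yq -i '.spec.template.spec.containers[0].image = "nginx:1.25"' deploy.yaml
    # arbitrary depth: update every map that carries an 'image' key
    yq -i '(.. | select(tag == "!!map" and has("image")) | .image) = "nginx:1.25"' deploy.yaml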

yq: validate yaml syntax

Mike Farah’s yq yaml processor has a full-featured validation command that is very detailed in its reporting, but the yaml specification itself is very lenient, which means yq may accept scenarios you did not expect (e.g. an empty file). yq -v file.yaml >/dev/null ; echo "final result = $?" Luckily, the yq tips-and-tricks section…
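One pattern from that tips-and-tricks section, as a sketch: evaluate a trivial expression and rely on the exit code alone:

    # non-zero exit status means the file failed to parse as yaml
    yq e 'true' file.yaml > /dev/null; echo "final result = $?"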

Ubuntu: pyenv for managing multiple Python versions and environments

Keeping the Ubuntu system-level Python version and modules independent from those desired at each project level is a difficult task best managed by a purpose-built tool. There are many solutions in the Python ecosystem, but one that stands out for simplicity is pyenv and pyenv-virtualenv. pyenv allows you to install and switch between different versions…
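A sketch of the typical flow once the interpreter build dependencies are installed (version and names are examples):

    # install pyenv plus the pyenv-virtualenv plugin
    curl https://pyenv.run | bash
    # build an interpreter, create an isolated virtualenv, pin it to a project directory
    pyenv install 3.11.9
    pyenv virtualenv 3.11.9 myproject
    cd ~/work/myproject && pyenv local myproject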

Ubuntu: LLama2 model on Ubuntu using llama.cpp

It is relatively easy to experiment with a base LLama2 model on Ubuntu, thanks to llama.cpp written by Georgi Gerganov. The llama.cpp project provides a C++ implementation for running LLama2 models, and works even on systems with only a CPU (although performance would be significantly enhanced if using a CUDA-capable GPU).
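A sketch of the build-and-run loop (the GGUF model filename is a placeholder for whichever quantized LLama2 weights you downloaded; newer llama.cpp versions rename the binary):

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp && make
    # run a prompt against a local quantized model
    ./main -m models/llama-2-7b-chat.Q4_K_M.gguf -p "What is the capital of France?" -n 128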

Mac: LLama2 model on Apple Silicon and GPU using llama.cpp

It is relatively easy to experiment with a base LLama2 model on M family Apple Silicon, thanks to llama.cpp written by Georgi Gerganov. The llama.cpp project provides a C++ implementation for running LLama2 models, and takes advantage of the Apple integrated GPU to offer a performant experience (see M family performance specs).
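The Mac flow is nearly identical; the notable difference is offloading layers to the Metal GPU (model name and layer count are assumptions):

    cd llama.cpp && make   # on Apple Silicon, Metal support is built by default
    # -ngl offloads model layers to the integrated GPU
    ./main -m models/llama-2-7b-chat.Q4_K_M.gguf -ngl 999 -p "Hello" -n 64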

minikube: installing minikube on Mac with secure TLS ingress

minikube makes it easy to spin up a local Kubernetes cluster on macOS, and adding an Ingress is convenient with its built-in Addons. In this article, I want to take it one step further and show how to expose the Ingress via TLS (secure https) using a custom key/certificate chain.
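The skeleton of that setup, assuming you already have a key/certificate pair for your test hostname:

    minikube start
    minikube addons enable ingress
    # store the custom key/cert as a TLS secret for the Ingress to reference
    kubectl create secret tls my-tls --cert=cert.pem --key=key.pem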

Mac: bare-metal virtualization on Apple Silicon with virtualbuddy

The Apple Virtualization Framework (AVF) provides the ability to run completely independent virtual machines on top of M family Apple Silicon. For example, you can run multiple versions of macOS virtualized for validating an application or its dependencies against different environments.  Additionally, cloning an existing VM (with little cost thanks to APFS copy-on-write) allows you…

Mac: multiple Python versions/virtualenv with brew and pyenv

Although you could use brew to install Python directly, the cleaner way to manage Python versions and isolate Python virtual environments is by using pyenv and pyenv-virtualenv. pyenv allows you to install and switch between different versions of Python, while pyenv-virtualenv provides isolation of pip modules, for independence between projects.
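A sketch of the brew-based variant (version and environment names are examples):

    brew install pyenv pyenv-virtualenv
    pyenv install 3.11.9
    pyenv virtualenv 3.11.9 myenv
    pyenv local myenv   # pin the virtualenv to the current project directory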

Bash: fixing “Too many authentication failures” for ssh with private key authentication

If you are using ssh private/public keypair authentication, and get an almost immediate error like below: $ ssh -i id_rsa.pub myuser@a.b.c.d -p 22 Received disconnect from a.b.c.d port 22:2: Too many authentication failures Disconnected from a.b.c.d port 22 Then try again using the 'IdentitiesOnly' option. ssh -o 'IdentitiesOnly yes' -i id_rsa.pub myuser@a.b.c.d -p 22…
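To make the fix permanent, the same option can live in ~/.ssh/config (host alias and key path are assumptions):

    cat >> ~/.ssh/config <<'EOF'
    Host myserver
      HostName a.b.c.d
      IdentityFile ~/.ssh/id_rsa
      IdentitiesOnly yes
    EOF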

Ubuntu: resolving systemd error, “Start request repeated too quickly”

If your systemd service is failing with the following error message: XXX.service: Start request repeated too quickly The first thing to do is fix any underlying issues.  Use 'systemctl status <service>', 'journalctl -u <service>', and search any log files produced by the service to understand why the service failed multiple times and exceeded its StartLimitBurst.
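Once the root cause is addressed, a sketch of clearing the rate-limit state (unit name is a placeholder):

    # inspect why the unit kept failing
    systemctl status myservice
    journalctl -u myservice -n 50 --no-pager
    # clear the failed state and start-rate counter, then retry
    sudo systemctl reset-failed myservice
    sudo systemctl start myservice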

Vault: synchronizing secrets from Vault to Kubernetes using Vault Secrets Operator

The Vault Secrets Operator is a Vault integration that runs inside a Kubernetes cluster and synchronizes Vault-level secrets to Kubernetes-level secrets. This secret synchronization happens transparently to the running workloads, without any need to retrofit existing images or manifests. In this article, I will show how to: install the Vault Secrets Operator (VSO), configure the…
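A sketch of the Helm-based install of VSO (namespace is the chart's conventional default):

    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm install vault-secrets-operator hashicorp/vault-secrets-operator \
      --namespace vault-secrets-operator-system --create-namespace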

minikube: exposing a deployment using ingress with secure TLS

minikube makes it easy to spin up a local Kubernetes cluster, and adding an Ingress is convenient with its built-in Addons. In this article, I want to take it one step further and show how to use a custom key/certificate to expose a service using TLS (secure https).
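A sketch of the Ingress side of the wiring (host, service, and secret names are placeholders):

    kubectl create secret tls my-tls --cert=cert.pem --key=key.pem
    cat <<'EOF' | kubectl apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
    spec:
      tls:
        - hosts:
            - myapp.local
          secretName: my-tls
      rules:
        - host: myapp.local
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service
                    port:
                      number: 80
    EOF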

Vault: JWT authentication mode with multiple roles to isolate secrets

In this article, I will detail how to use Vault JWT auth mode to isolate the secrets of two different deployments in the same Kubernetes cluster.  This will be done by using two different Kubernetes Service Accounts, each of which generates unique JWTs that are tied to a different Vault role. JWT auth mode is…
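A sketch of the role isolation on the Vault side (audience, namespace, and ServiceAccount names are placeholders); a second role would be bound to the other ServiceAccount:

    vault auth enable jwt
    # bind this role to one specific Kubernetes ServiceAccount subject
    vault write auth/jwt/role/team-a role_type=jwt user_claim=sub \
      bound_audiences="https://kubernetes.default.svc" \
      bound_subject="system:serviceaccount:team-a:app-sa" \
      token_policies="team-a-policy"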

Ansible: resolving error “Invalid callback for stdout specified: yaml”

If you are getting the following error when invoking an Ansible playbook or any of the Ansible related utilities: ERROR! Invalid callback for stdout specified: yaml This means Ansible is attempting to use the new YAML callback plugin, but cannot find the Ansible Galaxy community.general module.  This module is installed by the 'ansible' pip module…
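A sketch of the two usual fixes:

    # either install the full 'ansible' pip package, which bundles community collections
    pip install ansible
    # or add just the missing collection to a bare ansible-core install
    ansible-galaxy collection install community.general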

Vault: NodeJS Express web app using node-vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a NodeJS Express web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using the node-vault module…
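A minimal read sketch with node-vault (endpoint, token handling, and the KV v2 secret path are assumptions):

    npm install node-vault
    node -e '
      const vault = require("node-vault")({ endpoint: "http://vault:8200", token: process.env.VAULT_TOKEN });
      vault.read("secret/data/myapp").then(r => console.log(r.data.data));
    '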

Vault: Spring Boot web app using Spring Cloud Vault to fetch secrets

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. In this article, I will show how a Java Spring Boot web application deployed into a Kubernetes cluster can fetch a secret directly from the Vault server using Spring Cloud Vault…
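A sketch of the Spring-side configuration (Vault URI and role are placeholders; depending on your Spring Cloud version this may belong in bootstrap.yml instead):

    # application.yml consumed by Spring Cloud Vault (YAML via heredoc)
    cat > src/main/resources/application.yml <<'EOF'
    spring:
      cloud:
        vault:
          uri: http://vault.vault.svc:8200
          authentication: KUBERNETES
          kubernetes:
            role: my-app-role
    EOF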

Vault: HashiCorp Vault deployed into Kubernetes cluster for secret management

HashiCorp Vault is a secret and encryption management system that allows your organization to secure sensitive information such as API keys, certificates, and passwords. It has tight integrations with Kubernetes that allow containers to fetch secrets without requiring them to be hardcoded into environment variables, files, or external services. The official docs already provide usage scenarios, so…
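A sketch of the chart install (dev mode shown purely for experimentation, not production):

    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm install vault hashicorp/vault --namespace vault --create-namespace \
      --set server.dev.enabled=true   # in-memory, auto-unsealed dev server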

Bash: fixing SSH authentication error “bad ownership or modes for file/directory”

If ssh private/public keypair authentication is failing, check the logs on the server side for permission errors.  On Debian/Ubuntu check for these errors in "/var/log/auth.log". # error if authorized_keys file has too wide a permission for others Authentication refused: bad ownership or modes for file /home/myuser/.ssh/authorized_keys # error if .ssh directory has too wide a…
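The corresponding fix is tightening ownership and modes (username is a placeholder):

    # on the server: neither .ssh nor authorized_keys may be writable by group/other
    chown -R myuser:myuser /home/myuser/.ssh
    chmod 700 /home/myuser/.ssh
    chmod 600 /home/myuser/.ssh/authorized_keys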

GitLab: Continuous Deployment with Agent for Kubernetes and GitLab pipeline

GitLab pipelines are frequently used for the building of binaries and publishing of images to container registries, but do not always follow through with Continuous Deployment to a live environment. One reason is that pipelines do not usually have access to the internal systems where these applications are meant to be deployed. In this article…
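A sketch of the repo-side agent configuration that grants CI access (agent name and project path are placeholders; the file location follows the agent convention):

    mkdir -p .gitlab/agents/my-agent
    cat > .gitlab/agents/my-agent/config.yaml <<'EOF'
    ci_access:
      projects:
        - id: mygroup/myproject
    EOF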

GitLab: self-managed runner for CI/CD jobs on GCP VM instances

The globally shared set of GitLab runners for CI/CD jobs works well for building binaries, publishing images, and reaching out to publicly available endpoints for services and infrastructure building. But the ability to run a private, self-managed runner can grant pipelines entirely new levels of functionality on several fronts: it can communicate openly with private, internal…
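A sketch of installing and registering such a runner on the VM (URL, token, and executor are placeholders):

    # install the gitlab-runner package on the Debian/Ubuntu VM
    curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
    sudo apt-get install -y gitlab-runner
    # register against your GitLab instance (token elided)
    sudo gitlab-runner register --non-interactive --url "https://gitlab.com/" \
      --registration-token "<token>" --executor "docker" --docker-image "alpine:latest"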