DevOps

Terraform: module for conditional include of related resources

If you have a set of resources in Terraform that are conditionally included based on the same criteria, instead of appending a “count/for_each” on every resource definition, consider refactoring them into a module. The conditional can then be placed on the module definition instead of polluting each resource definition. For example, if you had several…
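
For a rough idea of the shape this takes (the module name, path, and variable below are hypothetical), the single conditional sits on the module block, which Terraform 0.13 and later allow:

cat > monitoring.tf <<'EOF'
# Hypothetical sketch: one count on the module replaces a count/for_each
# on every resource inside ./modules/monitoring
variable "enable_monitoring" {
  type    = bool
  default = false
}

module "monitoring" {
  source = "./modules/monitoring"          # local module holding the related resources
  count  = var.enable_monitoring ? 1 : 0   # single conditional for the whole set
}
EOF
terraform init && terraform plan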

Github: security scanning built into GitHub Actions image build

Github Actions provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there are custom actions available for running the Trivy container security scanner. In this article, I will show you how to modify your GitHub Action to run the Trivy security scanner against your image, and…
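
The scan step the workflow adds is essentially the same check you can run locally with the Trivy CLI; a hedged sketch against a hypothetical image tag:

# Build the image, then fail on HIGH/CRITICAL findings
docker build -t myapp:latest .
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest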

GitLab: security scanning built into GitLab Pipelines image build

GitLab Pipelines provide the ability to define a build workflow, and for projects that are building an OCI (Docker) image, there is a convenient method for doing container security scanning as part of the build process. As described in the official documentation, add the Container Scanning include to your .gitlab-ci.yml pipeline definition…

GitLab: URL shortcut to override pipeline variable values

GitLab pipelines are a convenient way to expose deployment/delivery tasks. But with their rudimentary web UI for variable input, it can be challenging for users to populate the required list of variables. One way of making it more convenient for end-users is to provide them a URL pre-populated with the specific branch and pipeline variable…
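
As a sketch (project path, branch, and variable name are hypothetical), the “Run pipeline” page accepts query parameters that prefill the branch and variables:

# Open the Run pipeline form with the branch and a variable pre-populated
xdg-open "https://gitlab.example.com/mygroup/myproject/-/pipelines/new?ref=main&var[DEPLOY_ENV]=staging"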

OpenTofu: installing OpenTofu on Debian/Ubuntu

Terraform has now been forked into the open-source OpenTofu project. The ‘tofu’ binary is a drop-in replacement for terraform, and this article will show you how to install it on Debian/Ubuntu. After installation, we will then use the Debian/Ubuntu Alternatives concept to supersede existing calls to ‘terraform’ so that they instead invoke ‘tofu’, starting with setup of the OpenTofu apt repository…
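
Once the tofu package is installed from the OpenTofu apt repository, the alternatives step looks roughly like this (the link path and priority are assumptions; adjust to your host):

# Register tofu as an alternative target for the 'terraform' command
sudo update-alternatives --install /usr/local/bin/terraform terraform /usr/bin/tofu 20
update-alternatives --display terraform
terraform version   # now resolves to the tofu binary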

Terraform: external yaml file as a contribution model for outside teams

If you are using Terraform as a way to provide infrastructure/services to outside teams, your Terraform definitions and variables may initially be owned by your team alone, with all “tf apply” operations done by trusted internal group members at the CLI. But once there is a certain amount of maturity in the solution, the Terraform…
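
A minimal sketch of the idea, with hypothetical file and key names: outside teams edit a plain YAML file, and the Terraform configuration reads it with yamldecode():

cat > contributions.yaml <<'EOF'
buckets:
  - name: team-a-assets
    location: US
EOF
cat > contributions.tf <<'EOF'
locals {
  contrib = yamldecode(file("${path.module}/contributions.yaml"))
}

output "requested_buckets" {
  value = local.contrib.buckets   # consumed by resources in the real configuration
}
EOF
terraform init && terraform apply -auto-approve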

Vault: synchronizing secrets from Vault to Kubernetes using Vault Secrets Operator

The Vault Secrets Operator is a Vault integration that runs inside a Kubernetes cluster and synchronizes Vault-level secrets to Kubernetes-level secrets. This secret synchronization happens transparently to the running workloads, without any need to retrofit existing images or manifests. In this article, I will show how to install the Vault Secrets Operator (VSO), configure the…
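
The operator install itself is a Helm chart from the HashiCorp repository; a hedged sketch (the namespace name is an assumption):

# Install the Vault Secrets Operator into its own namespace
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault-secrets-operator hashicorp/vault-secrets-operator \
  --namespace vault-secrets-operator-system --create-namespace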

Vault: JWT authentication mode with multiple roles to isolate secrets

In this article, I will detail how to use Vault JWT auth mode to isolate the secrets of two different deployments in the same Kubernetes cluster. This will be done by using two different Kubernetes Service Accounts, each of which generates unique JWTs tied to a different Vault role. JWT auth mode is…
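
A rough sketch of the Vault side, with hypothetical namespace, ServiceAccount, and policy names; each role is bound to a different ServiceAccount subject so the two deployments cannot read each other's secrets:

# Enable JWT auth and point it at the cluster's OIDC discovery endpoint
vault auth enable jwt
vault write auth/jwt/config \
  oidc_discovery_url="https://kubernetes.default.svc.cluster.local" \
  oidc_discovery_ca_pem=@k8s-ca.crt
# Role for the first deployment, bound to its ServiceAccount subject
vault write auth/jwt/role/app-a \
  role_type=jwt \
  user_claim=sub \
  bound_subject="system:serviceaccount:team-a:app-a" \
  token_policies=team-a-secrets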

Ansible: resolving error “Invalid callback for stdout specified: yaml”

If you are getting the following error when invoking an Ansible playbook or any of the Ansible-related utilities: “ERROR! Invalid callback for stdout specified: yaml”, it means Ansible is attempting to use the new YAML callback plugin but cannot find the Ansible Galaxy community.general collection. This collection is installed by the ‘ansible’ pip package,…
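
A sketch of the fix: either install the full ‘ansible’ pip package (which bundles the collection) or pull the collection directly:

# Install the collection that provides the yaml stdout callback
ansible-galaxy collection install community.general
# Then reference it in ansible.cfg:
#   [defaults]
#   stdout_callback = community.general.yaml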

GitLab: Continuous Deployment with Agent for Kubernetes and GitLab pipeline

GitLab pipelines are frequently used for the building of binaries and publishing of images to container registries, but do not always follow through with Continuous Deployment to a live environment. One reason is that pipelines do not usually have access to the internal systems where these applications are meant to be deployed. In this article,…
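
Once the Agent for Kubernetes is registered, a deploy job can simply switch to the context the agent exposes; a sketch with a hypothetical project path, agent name, and manifest directory:

# Inside a GitLab CI job that uses the agent's CI/CD tunnel
kubectl config use-context mygroup/myproject:my-agent
kubectl apply -f deploy/manifests/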

GitLab: self-managed runner for CI/CD jobs on GCP VM instances

The globally shared set of GitLab runners for CI/CD jobs works well for building binaries, publishing images, and reaching out to publicly available endpoints for services and infrastructure building. But the ability to run a private, self-managed runner can grant pipelines entirely new levels of functionality on several fronts: it can communicate openly with private, internal…
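
Registering the runner on the GCP VM is a single command; a hedged sketch (URL, token, and executor values are placeholders for your own):

# Register a docker-executor runner against your GitLab instance
sudo gitlab-runner register --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<project-or-group-token>" \
  --executor docker \
  --docker-image alpine:latest \
  --description "gcp-vm-runner"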

GitLab: automated build and publish of multi-platform container image with GitLab pipeline

GitLab CI/CD pipelines can be used to automatically build and push Docker images to the GitLab Container Registry. Beyond building a simple image, in this article I will show how to define a workflow that builds and pushes a multi-platform image (amd64, arm64, arm32) with a manifest index to the GitLab Container Registry. This is enabled by using…
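
Under the hood this relies on Docker Buildx with QEMU emulation; the equivalent commands the job runs look roughly like this (the image name comes from GitLab's predefined CI variables):

# Build and push amd64/arm64/arm32 variants plus a manifest index
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag "$CI_REGISTRY_IMAGE:latest" \
  --push .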

Helm: automated publishing of Helm repo with Github Actions

In a previous article, I described how to expose a Github source repo as a public Helm repository by enabling Github Pages and running the chart-releaser utility. In this article, I want to remove the manual invocation of the chart-releaser, and instead place that into a Github Actions workflow that automatically publishes changes to the…

Helm: manually publishing Helm repo on Github using chart-releaser

The only requirement for a public Helm chart repository is that it exposes a URL named “index.yaml”. So by adding a file named “index.yaml” to source control and enabling Github Pages to serve the file over HTTPS, you have the minimal basis for a public Helm chart repository. The backing Chart content (.tgz) can also…
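
A sketch of the manual chart-releaser flow (owner, repo, and chart path are hypothetical; verify the flags against your cr version):

# Package the chart, upload the .tgz as a GitHub release, and regenerate index.yaml
cr package charts/mychart
cr upload --owner myorg --git-repo my-helm-charts --package-path .cr-release-packages
cr index --owner myorg --git-repo my-helm-charts \
  --charts-repo https://myorg.github.io/my-helm-charts --push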

Github: automated build and publish of multi-platform container image with Github Actions

Github Actions provide the ability to define a build workflow based on Github repository events. The workflow steps are defined as yaml and can be triggered by various events, including a code push, branch, or tagging in the repository. In this article, I will show how to define workflow steps that build and push a…
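
After the workflow pushes the image, you can confirm that all platforms landed under a single manifest index; a sketch with a hypothetical image name:

# List the per-platform manifests behind the published tag
docker buildx imagetools inspect ghcr.io/myorg/myimage:latest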

Helm: discovering Helm chart releases installed into Kubernetes cluster

If you are administering a Kubernetes cluster that you have inherited or perhaps not visited in a while, then you may need to reacquaint yourself with: which Helm charts are installed into what namespaces, if there are chart updates available, and then what values were used for chart installation. Below are commands that can assist…
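
The starting point is usually a pair of Helm commands like these (release and namespace names are hypothetical):

# Every release in every namespace, with chart and app versions
helm list --all-namespaces
# The values a specific release was installed with
helm get values my-release -n my-namespace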

Terraform: error removing module containing legacy provider block, ‘Provider configuration not present’

If you have just removed a module declaration from your Terraform configuration and now get a ‘Provider configuration not present’ error when running apply: Error: Provider configuration not present To work with module.mymodule_legacysyntax.null_resource.test_rs (orphan) its original provider configuration at module.mymodule_legacysyntax.provider["registry.terraform.io/hashicorp/null"] is required, but it has been removed. This occurs when a provider configuration is removed…
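
One way out, sketched here with the module address from the error, is to drop the orphaned objects from state so Terraform no longer needs the removed provider configuration (take a state backup first):

# Remove the orphaned module's resources from state, then re-run apply
terraform state list | grep module.mymodule_legacysyntax
terraform state rm 'module.mymodule_legacysyntax'
terraform apply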

Ansible: resolving ‘could not initialize the preferred locale: unsupported locale setting’

If you are getting the following error when invoking ‘ansible’, ‘ansible-playbook’, ‘ansible-galaxy’ or any of the Ansible-related utilities: “ERROR: Ansible could not initialize the preferred locale: unsupported locale setting”, it means Ansible cannot find a locale ending in “.UTF-8”. Check the currently installed locales ($ locale -a), then export the LC_ALL variable to one…
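
On Debian/Ubuntu the fix is typically to generate a UTF-8 locale and export it before running Ansible:

# Generate a UTF-8 locale and point Ansible at it
sudo locale-gen en_US.UTF-8
export LC_ALL=en_US.UTF-8
ansible --version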

Terraform: terraform_remote_state to pass values to other configurations

It would be uncommon to have one monolithic Terraform configuration for all the infrastructure in your organization. More than likely, there are multiple groups and each has responsibility and ownership of certain components (e.g. networking, storage, authorization, Kubernetes). As an example, let’s say your responsibility is the Kubernetes cluster build. You may need the following…
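
The mechanism is the terraform_remote_state data source; a minimal sketch assuming the networking team keeps its state in a GCS bucket (bucket, prefix, and output names are hypothetical):

cat > network-inputs.tf <<'EOF'
data "terraform_remote_state" "network" {
  backend = "gcs"
  config = {
    bucket = "acme-tfstate"   # hypothetical bucket holding the network team's state
    prefix = "network"
  }
}

# e.g. consume one of their published outputs:
# data.terraform_remote_state.network.outputs.vpc_name
EOF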

Terraform: fixing error “querying Cloud Storage failed: storage: bucket doesn’t exist”

If you are attempting to run “terraform init” with a Google Cloud Storage backend and get the following error: Error: Failed to get existing workspaces: querying Cloud Storage failed: storage: bucket doesn’t exist The first check should be that the Google Cloud Storage bucket indeed exists, using gsutil (project_id=myproject-123; gsutil ls -p $project_id). If the…
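
If the bucket really is missing, creating it and re-running init is usually all that is needed (bucket name and region here are hypothetical):

# Create the backend bucket, then retry initialization
gsutil mb -p myproject-123 -l us-central1 gs://myproject-123-tfstate
terraform init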

GKE: terraform lifecycle ‘ignore_changes’ to manage external changes to GKE cluster

As much as Terraform pushes to be the absolute system of record for resources it creates, often valid external processes are assisting in managing those same resources. Here are some examples of legitimate external changes: other company-approved Terraform scripts applying labeling to resources in order to track ownership and costs, security teams modifying IAM roles…
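
The lifecycle block is where those external changes get tolerated; a hedged sketch (the cluster definition and ignored attribute are examples only):

cat > gke.tf <<'EOF'
resource "google_container_cluster" "primary" {
  name               = "my-cluster"     # hypothetical cluster
  location           = "us-central1"
  initial_node_count = 1

  lifecycle {
    # Tolerate labels applied by external, company-approved processes
    ignore_changes = [resource_labels]
  }
}
EOF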

Terraform: migrate state from local to remote Google Cloud Storage bucket and back

In this article I will demonstrate how to take a Terraform configuration that is using a local state file and migrate its persistent state to a remote Google Cloud Storage bucket (GCS). We will then perform the migration again, but this time to bring the remote state back to a local file. We will illustrate…
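
The migration itself is driven by terraform init; a sketch of both directions (the backend block edits happen in your configuration between the two commands):

# After adding a backend "gcs" block to the configuration:
terraform init -migrate-state     # copies the local terraform.tfstate into the GCS bucket
# Later, after removing (or commenting out) the backend block:
terraform init -migrate-state     # pulls the state back down to a local file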

Helm: Installing Helm on Ubuntu

Update Aug 2023: using newer ‘signed-by’ attribute for apt signing keys. Installing Helm using apt is a straight-forward procedure and documented on the official site. Coming straight from the official helm documentation, here are the commands for Ubuntu 22: curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null; sudo chmod 644 /usr/share/keyrings/helm.gpg; sudo…
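
The remaining steps, paraphrased from the same official instructions (verify against the current Helm docs before relying on them):

# Add the apt source that uses the key installed above, then install helm
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" \
  | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install -y helm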

Terraform: using update-alternatives to manage multiple terraform binaries

If you have multiple terraform projects, it can be necessary to support multiple versions of the terraform binary to match module and provider dependencies. Instead of creating a custom solution of binary copies and links, this can be done using the Alternatives concept which handles these symbolic links in a standard way using links in…
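
A sketch with two hypothetical install paths; the highest priority wins in auto mode, and --config lets you switch interactively:

# Register two terraform versions and choose between them
sudo update-alternatives --install /usr/local/bin/terraform terraform /opt/terraform/1.5.7/terraform 10
sudo update-alternatives --install /usr/local/bin/terraform terraform /opt/terraform/1.9.8/terraform 20
sudo update-alternatives --config terraform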

RabbitMQ: Deleting a ghost queue that cannot be removed at the GUI/CLI

If you get a timeout/errors trying to delete a RabbitMQ queue from the management dashboard or CLI with an error similar in syntax to below: failed to perform operation on queue '' in vhost '' due to timeout Then you can attempt deletion using rabbitmqctl to evaluate an Erlang expression: rabbitmqctl eval 'Q = {resource,…

SaltStack: salt-ssh for agentless automation on Ubuntu

Configuration Management tools like SaltStack are invaluable for managing infrastructure at scale. Even in the growing world of containerization, there is the need for bulk automation. This article will detail installation of Salt SSH, which leverages the power of SaltStack without the requirement of an agent install.
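
A sketch of the agentless flow once salt-ssh is installed and a roster file lists the target hosts (roster entries are site-specific):

# Install salt-ssh and run a test ping against everything in /etc/salt/roster
sudo apt-get install -y salt-ssh
salt-ssh '*' test.ping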

SaltStack: Installing a Salt Master on Ubuntu Xenial

Configuration Management tools like SaltStack are invaluable for managing infrastructure at scale. Even in the growing world of containerization where immutable image deployment is the norm, those images need to be built in a repeatable and auditable fashion. This article will detail installation of the SaltStack master on Ubuntu Xenial 16.04, with validation using a single Minion. Note that…
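
The core of the install and validation is short; a sketch (the minion install and key exchange happen on their respective hosts):

# On the master host
sudo apt-get install -y salt-master
sudo salt-key -L            # list pending minion keys
sudo salt-key -A -y         # accept them
sudo salt '*' test.ping     # validate master/minion communication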

Maven: Installing a private Maven repository on Ubuntu using Apache Archiva

An essential part of the standard build process for Java applications is having a repository where project artifacts are stored. Artifact curation provides the ability to manage dependencies, quickly roll back releases, support compatibility of downstream projects, do QA promotion from test to production, support a continuous build pipeline, and provide auditability. Archiva from the Apache…

SaltStack: Installing a Salt Master on Ubuntu 14.04

Configuration Management tools like SaltStack are invaluable for managing infrastructure at scale. Even in the growing world of containerization where immutable image deployment is the norm, those images need to be built in a repeatable and auditable fashion. This article will detail installation of the SaltStack master on Ubuntu 14.04, with validation using a single Minion. Note that Minion…

ELK: Installing MetricBeat for collecting system and application metrics

ElasticSearch’s Metricbeat is a lightweight shipper of both system and application metrics that runs as an agent on a client host. That means that along with standard cpu/mem/disk/network metrics, you can also monitor Apache, Docker, Nginx, Redis, etc. as well as create your own collector in the Go language. In this article we will describe installing…
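
A sketch of the install and module setup on a client host, assuming the Elastic apt repository is already configured:

# Install metricbeat, enable an application module, and start shipping
sudo apt-get install -y metricbeat
sudo metricbeat modules enable docker
sudo systemctl enable --now metricbeat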

ELK: ElastAlert for alerting based on data from ElasticSearch

ElasticSearch’s commercial X-Pack has alerting functionality based on ElasticSearch conditions, but there is also a strong open-source contender from Yelp’s Engineering group called ElastAlert. ElastAlert offers developers the ultimate control, with the ability to easily create new rules, alerts, and filters using all the power and libraries of Python.
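
A sketch of getting ElastAlert running, with a hypothetical rule file; the pip package ships helper commands for creating its metadata index and dry-running rules:

# Install ElastAlert, create its metadata index, and test a rule without alerting
pip install elastalert
elastalert-create-index
elastalert-test-rule rules/high_error_rate.yaml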

ELK: Using Curator to manage the size and persistence of your index storage

The Curator product from ElasticSearch allows you to apply batch actions to your indexes (close, create, delete, etc.). One specific use case is applying a retention policy to your indexes, deleting any indexes that are older than a certain threshold. Installation: start by installing Curator using apt and pip ($ sudo apt-get install python-pip -y)…
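
A sketch of the retention use case; the action file below is named hypothetically and would contain a delete_indices action with an age filter:

# Install Curator and run an action file against the cluster defined in config.yml
sudo pip install elasticsearch-curator
curator --config config.yml delete_old_indices.yml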

VirtualBox: Installing VirtualBox and Vagrant on Ubuntu 14.04/16.04

Although container-based engines such as Docker are highly popularized for newer application deployment, there will still be widespread use of OS virtualization engines for years to come. One of the most popular virtualization engines for development purposes is the open-source VirtualBox from Oracle. This article will detail its installation on Ubuntu 14.04.
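
A sketch of the quickest path on Ubuntu, followed by bringing up a first box (the box name is just an example):

# Install both tools from the Ubuntu repositories and boot a test VM
sudo apt-get install -y virtualbox vagrant
vagrant init ubuntu/trusty64
vagrant up --provider virtualbox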