DevOps

Docker: Sending Spring Boot logging to syslog

Building services with Spring Boot gives a development team a jump start on many production concerns, including logging. But unlike a standard deployment, where logging to a local file is typically where the developer’s responsibility ends, with Docker we must think about how to ship logs to a location outside our ephemeral container filesystem.
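
As a rough sketch of one approach (the collector address and image name below are placeholders), the container’s stdout/stderr can be shipped with Docker’s built-in syslog log driver:

    docker run -d \
      --log-driver=syslog \
      --log-opt syslog-address=udp://logs.example.com:514 \
      --log-opt tag=spring-boot-app \
      my-spring-boot-image

Alternatively, the Spring Boot application itself can write to syslog through a Logback or Log4j syslog appender; either way the logs leave the container before it is destroyed.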

Squid: Configuring an Ubuntu host to use a Squid proxy for internet access

Once you have a Squid proxy set up as described in my article here, the next challenge is configuring your Ubuntu servers to use this proxy by default instead of attempting direct internet connections. There are several clients we want to use Squid by default: the apt package manager, interactive consoles and wget/curl, and Java applications.
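
As a quick sketch (assuming the proxy lives at squid.example.com on the default port 3128), those three classes of clients are typically pointed at Squid like this:

    # apt: e.g. /etc/apt/apt.conf.d/95proxy
    Acquire::http::Proxy "http://squid.example.com:3128/";
    Acquire::https::Proxy "http://squid.example.com:3128/";

    # interactive shells, wget/curl: e.g. /etc/profile.d/proxy.sh
    export http_proxy=http://squid.example.com:3128
    export https_proxy=http://squid.example.com:3128

    # Java applications: JVM proxy system properties
    java -Dhttp.proxyHost=squid.example.com -Dhttp.proxyPort=3128 \
         -Dhttps.proxyHost=squid.example.com -Dhttps.proxyPort=3128 -jar app.jar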

HAProxy: Using HAProxy for SSL termination on Ubuntu

HAProxy is a high-performance TCP/HTTP (Layer 4 and Layer 7) load balancer and reverse proxy. A common pattern is to let HAProxy be the fronting SSL-termination point, with HAProxy then determining which pooled backend server serves the request.
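
A minimal sketch of that pattern in haproxy.cfg (the certificate path and backend addresses are placeholders) looks roughly like:

    frontend https-in
        bind *:443 ssl crt /etc/haproxy/certs/site.pem
        default_backend app_servers

    backend app_servers
        balance roundrobin
        server app1 10.0.0.11:8080 check
        server app2 10.0.0.12:8080 check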

Nginx: Using Nginx for SSL termination on Ubuntu

Nginx is a popular reverse proxy and load balancer that focuses on Layer 7 (application) traffic. A common pattern is to let Nginx be the fronting SSL-termination point, with Nginx then determining which pooled backend server is best placed to serve the request.
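
A minimal sketch of that pattern (certificate paths and upstream addresses are placeholders) looks roughly like:

    upstream app_servers {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/certs/site.crt;
        ssl_certificate_key /etc/nginx/certs/site.key;

        location / {
            proxy_pass http://app_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }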

Apache2: Enable LDAP authentication and SSL termination for Ubuntu

Some web applications treat authentication as a concern orthogonal to the application itself – shipping without any login functionality and leaving authentication as an operational concern. When this happens, a reverse proxy with LDAP integration can act as an architectural sentry in front of the web application and also fulfills the…
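
The general shape of that Apache2 virtual host (the LDAP URL, certificate paths, and backend address are placeholders; mod_ssl, mod_proxy_http, mod_ldap, and mod_authnz_ldap must be enabled) is roughly:

    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /etc/apache2/certs/site.crt
        SSLCertificateKeyFile /etc/apache2/certs/site.key

        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/

        <Location />
            AuthType Basic
            AuthName "Restricted"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>
    </VirtualHost>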

Jenkins: Setting up a continuous integration server on Ubuntu

Jenkins is an open-source automation server that is critical to building a continuous integration and delivery pipeline. It is extensible and has a wealth of plugins that integrate with numerous enterprise systems. Here are the detailed steps for installing a Jenkins server on Ubuntu.
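
The broad strokes of the apt-based install (the repository and key URLs below are the ones the Jenkins project has published for Debian/Ubuntu; check the current docs for the exact key file name) are:

    wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
    echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
    sudo apt-get update
    sudo apt-get install -y jenkins    # web UI listens on port 8080 by default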

Maven: Installing a 3rd party jar to a local or remote repository

Especially in enterprise application development, there can be 3rd party dependencies that are not available in public Maven repositories. These may be internal, business-specific libraries or licensed libraries that have limitations on usage. When this is the case, you can either publish to a private Maven repository that controls authorization or you can put…
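
For example (the coordinates and repository URL below are placeholders), installing into the local ~/.m2 repository versus deploying to a remote repository:

    # local repository
    mvn install:install-file -Dfile=vendor-lib-1.2.jar \
        -DgroupId=com.example.vendor -DartifactId=vendor-lib \
        -Dversion=1.2 -Dpackaging=jar

    # remote repository (repositoryId must match a <server> entry in settings.xml)
    mvn deploy:deploy-file -Dfile=vendor-lib-1.2.jar \
        -DgroupId=com.example.vendor -DartifactId=vendor-lib \
        -Dversion=1.2 -Dpackaging=jar \
        -DrepositoryId=internal-releases \
        -Durl=https://repo.example.com/artifactory/libs-release-local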

Maven: Installing a private Maven repository on Ubuntu using Artifactory

An essential part of the standard build process for Java applications is having a set of repositories where project artifacts are stored. Artifact curation provides the ability to manage dependencies, quickly roll back releases, support compatibility of downstream projects, do QA promotion from test to production, support a continuous build pipeline, and provide auditability. JFrog puts…

AppDynamics: Enabling verbose debug logs for Agents

Enabling verbose logs for the AppDynamics machine or database agents can be invaluable for troubleshooting connectivity or network issues. Luckily, this is easily done by editing the conf/logging/log4j.xml file. By default, only error-level messages are sent to the logs: <root> <priority value="error"/> <appender-ref ref="FileAppender"/> </root> But you can modify this so that debug…
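
For example, switching the root priority to debug (the appender name stays whatever your agent’s log4j.xml already declares):

    <root>
      <priority value="debug"/>
      <appender-ref ref="FileAppender"/>
    </root>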

AppDynamics: Java Spring PetClinic and PostgreSQL configured for monitoring

As an exploration of AppDynamics’ APM functionality, you may find it useful to deploy a sample application that can quickly return useful data. The Java Spring PetClinic connecting back to a PostgreSQL database provides a simple code base that exercises both database and application monitoring. In a previous article, I went over the detailed…

Selenium: Running headless automated tests on Ubuntu

Selenium is an open-source solution for automating the browser, allowing you to run continuous integration tests, validate performance and scalability, and perform regression testing of web applications. This kind of automated testing is useful not only from desktop systems, but also from server machines where you may want to monitor availability or the correctness of returned…
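
As a small illustration (not necessarily the article’s exact setup), a headless Chrome check driven from the Python bindings might look like this, assuming chromedriver and Chrome are installed on the server:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')

    driver = webdriver.Chrome(options=options)
    try:
        # load the page and make a simple correctness assertion
        driver.get('https://www.example.com/')
        assert 'Example' in driver.title
    finally:
        driver.quit()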

AppDynamics: Java Spring PetClinic and MySQL configured for monitoring

As an exploration of AppDynamics’ APM functionality, you may find it useful to deploy a sample application that can quickly return useful data. The Java Spring PetClinic connecting back to a MySQL database provides a simple code base that exercises both database and application monitoring. We’ll deploy the Java Spring PetClinic onto Tomcat running…

AppDynamics: Installing a Machine Agent on Ubuntu 14.04

The AppDynamics Machine Agent is used not only to report back on basic hardware metrics (CPU/memory/disk/network), but also as the hook for custom plugins that can report back on any number of applications including: .NET, Apache, AWS, MongoDB, Cassandra, and many others. In this article, I’ll go over the details of installing the Machine Agent…

Grafana: Connecting to an ElasticSearch datasource

The ElasticSearch stack (ELK) is a popular open-source solution that serves as both repository and search interface for a wide range of applications including: log aggregation and analysis, analytics store, search engine, and document processing. Its standard web front-end, Kibana, is a great product for data exploration and dashboards. However, if you have multiple data sources…

Grafana: Connecting to a Zabbix datasource

Zabbix is an open-source monitoring solution that provides metrics collection, dynamic indexes, alerting, dashboards, and an API for external integration. But graphing is arguably one of Zabbix’s weak points; it still builds static images while other enterprise and consumer applications have set end users’ expectations for graph visualization and interactivity very high. Luckily, the Zabbix plugin…
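
Installing the plugin is a one-liner with grafana-cli (the app then still needs to be enabled in the Grafana UI and pointed at your Zabbix API URL):

    sudo grafana-cli plugins install alexanderzobnin-zabbix-app
    sudo service grafana-server restart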

Grafana: Installation on Ubuntu 14.04

Grafana is an open-source visualization suite that is able to generate graphs and dashboards, in addition to alerting. It is designed to retrieve data from various backends including: Graphite, ElasticSearch, Prometheus, and Zabbix. This article will lead you through an installation of the latest stable version on Ubuntu 14.04.
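
As a rough sketch (the packagecloud repository shown here is the one Grafana documented at the time; verify against the current docs), the apt-based install looks like:

    curl -s https://packagecloud.io/gpg.key | sudo apt-key add -
    echo "deb https://packagecloud.io/grafana/stable/debian/ wheezy main" | \
        sudo tee /etc/apt/sources.list.d/grafana.list
    sudo apt-get update && sudo apt-get install -y grafana
    sudo service grafana-server start    # web UI on port 3000, default login admin/admin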

Zabbix: Alert to PagerDuty using Zabbix3

Having Zabbix send alert mails directly to user groups is typically outgrown as the system matures: the number of alerts increases, new lines of business and engineering groups are on-boarded, and on-call scheduling is implemented. If you already use PagerDuty for on-call scheduling, then it makes perfect sense to have Zabbix create incidents in…

VMware: Exporting from Oracle VirtualBox/Vagrant to vCloud Director

Oracle VirtualBox as a virtualization engine, paired with Vagrant, provides a cross-platform, virtualization-agnostic workflow for Linux, Windows, and MacOS. It is light enough to allow a developer to set up, test, and tear down virtual infrastructure as part of a unit test. You may find yourself in a position where you have built a VM in…
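
The export side is a single VBoxManage command (the VM name below is a placeholder; `VBoxManage list vms` shows the real one), and the resulting OVA/OVF appliance can then be uploaded to vCloud Director through its catalog UI or VMware’s ovftool:

    VBoxManage list vms
    VBoxManage export "myproject_default_1234" --output myvm.ova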

ELK: Architectural points of extension and scalability for the ELK stack

The ELK stack (ElasticSearch-Logstash-Kibana) is a horizontally scalable solution with multiple tiers and points of extension and scalability. Because so many companies have adopted the platform and tuned it for their specific use cases, it would be impossible to enumerate all the novel ways in which scalability and availability have been enhanced by load balancers…

ELK: Scaling an ElasticSearch Cluster

The heart of the ELK stack is Elasticsearch. In order to provide high availability and scalability, it needs to be deployed as a cluster with master and data nodes. The Elasticsearch cluster is responsible both for indexing incoming data and for searches against that indexed data. As described in the documentation, if there…
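
As a sketch of the node roles (using the node.master/node.data style settings from the Elasticsearch versions this stack was built on), elasticsearch.yml differs per node type:

    # dedicated master node
    cluster.name: my-elk-cluster
    node.name: master-1
    node.master: true
    node.data: false

    # data node
    cluster.name: my-elk-cluster
    node.name: data-1
    node.master: false
    node.data: true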

ELK: Feeding the logging pipeline

The most varied point in an ELK (Elasticsearch-Logstash-Kibana) stack is the mechanism by which custom events and logs get sent to Logstash for processing. Companies running Java applications with logging sent to log4j or SLF4J/Logback will have local log files that need to be tailed. Applications running in containers may send everything to stdout/stderr…
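
For the simplest case of tailing local files, a Logstash pipeline can read them directly (the log path and Elasticsearch address are placeholders):

    input {
      file {
        path => "/var/log/myapp/*.log"
        start_position => "beginning"
      }
    }
    output {
      elasticsearch { hosts => ["localhost:9200"] }
    }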

ELK: Federated Search with a Tribe node

Although the ELK stack has rich support for clustering, clustering is not supported over WAN connections because Elasticsearch is sensitive to latency. There are also practical concerns of network throughput given how much data some installations index on an hourly basis. So as nice as it would be to have a unified, eventually consistent…
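
For reference, a tribe node is configured in its elasticsearch.yml with one sub-client per remote cluster (the cluster names and hosts below are placeholders):

    tribe:
      dc1:
        cluster.name: elk-dc1
        discovery.zen.ping.unicast.hosts: ["es-master-dc1.example.com"]
      dc2:
        cluster.name: elk-dc2
        discovery.zen.ping.unicast.hosts: ["es-master-dc2.example.com"]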

ELK: Pointing Kibana to a Client Node

Kibana is the end-user web application that allows us to query Elasticsearch data and create dashboards that can be used for analysis and decision making. Although Kibana can be pointed at any of the nodes in your Elasticsearch cluster, the best way to distribute requests across the nodes is to use a non-master, non-data…
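
As a sketch (hostnames are placeholders), the client node disables both roles in elasticsearch.yml and kibana.yml is then pointed at it:

    # elasticsearch.yml on the client ("coordinating only") node
    node.master: false
    node.data: false

    # kibana.yml
    elasticsearch.url: "http://es-client-1.example.com:9200"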

SaltStack: Creating a ZooKeeper External Pillar using Python

SaltStack has the ability to create custom states, grains, and external pillars. There is a long list of standard external pillars, ranging from those that read local JSON files to those that pull from EC2, MongoDB, etcd, and MySQL. In this article, we will use Apache ZooKeeper as the storage facility for our SaltStack…
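
An external pillar is just a Python module exposing an ext_pillar() function that returns a dict; a bare-bones sketch using the kazoo ZooKeeper client (the module name, znode layout, and arguments are illustrative) looks like:

    # zk_pillar.py
    from kazoo.client import KazooClient

    def ext_pillar(minion_id, pillar, zk_hosts='localhost:2181', root='/salt/pillar'):
        '''Return pillar data read from the znodes directly under `root`.'''
        zk = KazooClient(hosts=zk_hosts)
        zk.start()
        try:
            data = {}
            for child in zk.get_children(root):
                value, _stat = zk.get('{0}/{1}'.format(root, child))
                data[child] = value.decode('utf-8')
            return data
        finally:
            zk.stop()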

Python: Using Python, JSON, and Jinja2 to construct a set of Logstash filters

Python is a language whose advantages are well documented, and the fact that it has become ubiquitous on most Linux distributions makes it well suited for quick scripting duties. In this article I’ll go through an example of using Python to read entries from a JSON file, and from each of those entries create a…
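
A stripped-down version of that idea (the JSON file name, its keys, and the filter template are all hypothetical) could look like:

    import json
    from jinja2 import Template

    TEMPLATE = Template('''filter {
      if [type] == "{{ type }}" {
        grok { match => { "message" => "{{ pattern }}" } }
      }
    }''')

    with open('filters.json') as f:
        entries = json.load(f)

    # render one Logstash filter file per JSON entry
    for entry in entries:
        with open('{0}-filter.conf'.format(entry['type']), 'w') as out:
            out.write(TEMPLATE.render(type=entry['type'], pattern=entry['pattern']))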

AppDynamics: Silent Install of Controller on Ubuntu and license directory

For full instructions on installing the AppDynamics Controller on Linux, see the official documentation. However, when you get to the step for installing in silent mode, it can be confusing: although the documentation shows you how to specify the path to a response file and the keys available, it does not give you a sample…

SaltStack: Setting a jinja2 variable from an inner block scope

When using jinja2 for SaltStack formulas, you may be surprised to find that your globally scoped variables cannot be modified inside a loop. Although this is counterintuitive given the scope behavior of most scripting languages, it is unfortunately the case that a jinja2 globally scoped variable cannot be modified from…
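
One common workaround, available in Jinja2 2.10 and later, is a namespace object whose attributes can be assigned from inside the loop (the grain and values here are purely illustrative):

    {% set ns = namespace(found=false) %}
    {% for host in grains.get('host_list', []) %}
      {% if host == 'web01' %}
        {% set ns.found = true %}
      {% endif %}
    {% endfor %}
    found: {{ ns.found }}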

Syslog: Sending Java log4j2 to rsyslog on Ubuntu

Logging has always been a critical part of application development. But the rise of OS virtualization, application containers, and cloud-scale logging solutions has turned logging into something bigger than managing local debug files. Modern applications and services are now expected to feed log aggregation and analysis stacks (ELK, Graylog, Loggly, Splunk, etc.). This can be…
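
For instance, a log4j2.xml can route everything to a local rsyslog daemon with the Syslog appender (assuming rsyslog has its UDP input enabled on port 514):

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
      <Appenders>
        <Syslog name="RsyslogAppender" host="localhost" port="514"
                protocol="UDP" facility="LOCAL0"/>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="RsyslogAppender"/>
        </Root>
      </Loggers>
    </Configuration>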

Node.js: Packaging modules for offline deployment using npm-bundle

In a production environment, it is common to have restricted internet access on the production deployment hosts. This means that using the standard ‘npm install’ and pulling modules from the registry.npmjs.org repository is not an option. Given the breadth of the dependency graph required for most modules, this packaging is something you want automated without…
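
The basic flow with npm-bundle (using express purely as an example package) is to build the tarball on a host with internet access and install it on the restricted host:

    # on a machine with internet access
    npm install -g npm-bundle
    npm-bundle express          # produces express-<version>.tgz with dependencies bundled

    # on the restricted production host
    npm install ./express-<version>.tgz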

Ubuntu: Pre-Validate Network ACL and Firewall Connectivity with Netcat

Although virtualization has pushed a self-service culture for infrastructure, it is still common in production environments to need your Network Operations team to open the ports required for any new application deployment. So, while you may be able to create the base virtualized host, you can’t go much further without the network connectivity.
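
Netcat makes this pre-validation a one-liner per firewall rule (the hosts and ports below are placeholders):

    # -z: scan only, send no data; -v: verbose; -w 3: three-second timeout
    nc -vz -w 3 db.example.com 5432

    # add -u for UDP ports
    nc -vzu -w 3 ntp.example.com 123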

SaltStack: Troubleshooting Basic Network Connectivity of Minion on Ubuntu

When troubleshooting basic connectivity from your SaltStack minions to your Salt master, the first thing to remember is the basic flow – the minions initiate the connection to ports 4505/4506 on the Salt master. With this in mind, if you have modified /etc/salt/minion so that the master is explicitly set and logs are set to…
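
Two quick checks from the minion side (the master hostname is a placeholder) are verifying that the ZeroMQ ports are reachable and running the minion in the foreground with debug logging:

    nc -vz salt-master.example.com 4505
    nc -vz salt-master.example.com 4506

    sudo service salt-minion stop
    sudo salt-minion -l debug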