ELK: Installing Logstash on Ubuntu 14.04

Logstash provides a powerful mechanism for listening to various input sources, filtering and extracting the fields, and then sending events to a persistence store like ElasticSearch.

Installing Logstash on Ubuntu is well documented, so in this article I will focus on Ubuntu specific steps required for Logstash 2.x and 5.x.

Continue reading “ELK: Installing Logstash on Ubuntu 14.04”

ELK: Using Ruby in Logstash filters

Logstash has a rich set of filters, and you can even write your own, but often this is not necessary since there is an out-of-the-box filter that allows you to embed Ruby code directly in the configuration file.

Using logstash-filter-ruby, you can use all the power of Ruby string manipulation to apply an exotic regular expression, parse an incomplete date format, write to a file, or even make a web service call.
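
As a quick sketch of what that looks like (the field names and file path are illustrative, and the event.get/event.set calls are the Logstash 5.x event API), a ruby filter can be dropped into a pipeline configuration like this:

$ sudo tee /etc/logstash/conf.d/15-ruby-example.conf > /dev/null <<'EOF'
filter {
  ruby {
    # add a field containing the length of the raw message
    code => "event.set('msg_length', event.get('message').to_s.length)"
  }
}
EOF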

Continue reading “ELK: Using Ruby in Logstash filters”

Zabbix: Accessing Zabbix using the py-zabbix Python module

The open-source Zabbix monitoring solution has a REST API that provides the ability for deep integrations with your existing monitoring, logging, and alerting systems.

This fosters development of community-driven modules like the py-zabbix Python module, which is an easy way to automate Zabbix as well as send/retrieve metrics.
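
Before scripting against it, you can install the module and confirm the API endpoint is reachable with an unauthenticated apiinfo.version call (the host below is a placeholder):

$ sudo pip install py-zabbix

$ curl -s -H "Content-Type: application/json-rpc" \
    -d '{"jsonrpc":"2.0","method":"apiinfo.version","params":{},"id":1}' \
    http://zabbix.example.com/zabbix/api_jsonrpc.php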

Continue reading “Zabbix: Accessing Zabbix using the py-zabbix Python module”

Docker: logspout for Docker log collection

Docker log collection can be done using various methods, but one particularly effective method is having a dedicated container whose sole purpose is to automatically sense other deployed containers and aggregate their log events.

This is the architectural model of logspout, an open-source project that acts as a router for the stdout/stderr logs of other containers.

If you do not have Docker installed yet, see my article here. Before moving on, you should be able to run the hello-world container.
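
As a sketch of that model (the syslog destination is a placeholder for your own collector), launching logspout comes down to mounting the Docker socket and pointing it at a route:

$ docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog://logs.example.com:514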

Continue reading “Docker: logspout for Docker log collection”

ELK: Running ElastAlert as a service on Ubuntu 14.04

ElastAlert from the Yelp Engineering group provides a very flexible platform for alerting on conditions coming from ElasticSearch.

In a previous article I fully describe running it interactively on an Ubuntu server, and now I'll expand on that by running it at system startup using a System V init script.

One of the challenges of getting ElastAlert to run as a service is that it has a very strict set of module requirements that can easily conflict with other Python applications, so we will use Python's virtualenv to build it in isolation and then call that wrapper from the service script.
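
The general flow, with illustrative paths, is to build a dedicated virtualenv and then have the init script call the isolated binaries:

$ sudo pip install virtualenv
$ virtualenv /opt/elastalert/venv
$ /opt/elastalert/venv/bin/pip install elastalert
$ /opt/elastalert/venv/bin/elastalert --config config.yaml --verbose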

Continue reading “ELK: Running ElastAlert as a service on Ubuntu 14.04”

ELK: Installing MetricBeat for collecting system and application metrics

ElasticSearch’s Metricbeat is a lightweight shipper of both system and application metrics that runs as an agent on a client host.  That means that along with standard cpu/mem/disk/network metrics, you can also monitor Apache, Docker, Nginx, Redis, etc. as well as create your own collector in the Go language.

In this article we will describe installing Metricbeat 5.x on Ubuntu when the back end ElasticSearch version is either 5.x or 2.x.
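
If you just want the shape of the 5.x installation up front (these steps mirror the Elastic apt repository instructions of the time), it boils down to:

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-5.x.list
$ sudo apt-get update && sudo apt-get install metricbeat
$ sudo service metricbeat start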

Continue reading “ELK: Installing MetricBeat for collecting system and application metrics”

ELK: ElastAlert for alerting based on data from ElasticSearch

ElasticSearch’s commercial X-Pack has alerting functionality based on ElasticSearch conditions, but there is also a strong open-source contender from Yelp’s Engineering group called ElastAlert.

ElastAlert offers developers the ultimate control, with the ability to easily create new rules, alerts, and filters using all the power and libraries of Python.

Continue reading “ELK: ElastAlert for alerting based on data from ElasticSearch”

Zabbix: Installing a Zabbix Agent on Ubuntu 14.04

The open-source Zabbix monitoring solution has very lightweight agents that are easy to install on Ubuntu.

Although the Ubuntu main repository has a build available, it is older, so in this article we are going to download and install the latest point version. Unfortunately, repo.zabbix.com cannot be added directly as an Ubuntu repository source.

Continue reading “Zabbix: Installing a Zabbix Agent on Ubuntu 14.04”

ELK: ElasticDump and Python to create a data warehouse job

By nature, the amount of data collected in your ElasticSearch instance will continue to grow and at some point you will need to prune or warehouse indexes so that your active collections are prioritized.

ElasticDump can assist in moving your indexes either to a distinct ElasticSearch instance that is setup specifically for long term data, or exporting the data as json for later import into a warehouse like Hadoop.  ElasticDump does not have a special filter for time based indexes (index-YYYY.MM.DD), so you must specify exact index names.
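
To make that concrete, copying a single day's index between clusters with explicit names (hosts and index are illustrative) looks like:

$ elasticdump \
    --input=http://source-es:9200/logstash-2017.01.15 \
    --output=http://warehouse-es:9200/logstash-2017.01.15 \
    --type=data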

In this article we will use Python to query a source ElasticSearch instance (meant for near real-time querying, keeping a minimal amount of data) and export any indexes from the last 14 days into a target ElasticSearch instance (meant for data warehousing, with more persistent storage and where users expect multi-second query times).

Continue reading “ELK: ElasticDump and Python to create a data warehouse job”

ELK: Using Curator to manage the size and persistence of your index storage

The Curator product from ElasticSearch allows you to apply batch actions to your indexes (close, create, delete, etc.).  One specific use case is applying a retention policy to your indexes, deleting any indexes that are older than a certain threshold.

Installation

Start by installing Curator using apt and pip:

$ sudo apt-get install python-pip -y
$ sudo pip install elasticsearch-curator
$ /usr/local/bin/curator --version
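
As a preview of the retention use case, a Curator 4.x/5.x action file (prefix, age threshold, and file names are illustrative) paired with a client config (curator.yml holds the ElasticSearch connection settings) can delete anything older than 14 days:

$ cat > delete_old_indexes.yml <<'EOF'
actions:
  1:
    action: delete_indices
    description: Delete time-based indexes older than 14 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
EOF

$ /usr/local/bin/curator --config curator.yml delete_old_indexes.yml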

Continue reading “ELK: Using Curator to manage the size and persistence of your index storage”

VirtualBox: Installing VirtualBox and Vagrant on Ubuntu 14.04/16.04

Although container-based engines such as Docker are highly popular for newer application deployments, OS virtualization engines will still see widespread use for years to come.

One of the most popular virtualization engines for development purposes is the open-source VirtualBox from Oracle.  This article will detail its installation on Ubuntu 14.04.

Continue reading “VirtualBox: Installing VirtualBox and Vagrant on Ubuntu 14.04/16.04”

Docker: Sending Spring Boot logging to syslog

Building services using Spring Boot gives a development team a jump start on many production concerns, including logging.  But unlike a standard deployment, where logging to a local file is typically the end of the developer's responsibility, with Docker we must think about how to send logs somewhere outside our ephemeral container.

The Docker logging drivers capture all the output from a container’s stdout/stderr, and can send a container’s logs directly to most major logging solutions (syslog, Logstash, gelf, fluentd).

As an added benefit, by making the logging implementation a runtime choice for the container, it provides flexibility to use a simpler implementation during development but a highly-available, scalable logging solution in production.
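
For example (the syslog endpoint and image name are placeholders), switching a container over to the syslog driver is just a pair of runtime flags:

$ docker run -d \
    --log-driver=syslog \
    --log-opt syslog-address=udp://logs.example.com:514 \
    my-spring-boot-image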

Continue reading “Docker: Sending Spring Boot logging to syslog”

Spring: Spring Boot with SLF4J/Logback sending to syslog

The Spring framework provides a proven and well documented model for the development of custom projects and services. The Spring Boot project takes an opinionated view of building production Spring applications, which favors convention over configuration.

In this article we will explore how to configure a Spring Boot project to use the Simple Logging Facade for Java (SLF4J) with a Logback backend to send log events to the console, filesystem, and syslog.

Continue reading “Spring: Spring Boot with SLF4J/Logback sending to syslog”

Docker: Installing Docker CE on Ubuntu 14.04 and 16.04

Docker is a container platform that streamlines software delivery and provides isolation, scalability, and efficiency with less overhead than OS level virtualization.

These instructions are taken directly from the official Docker for Ubuntu page, but I wanted to reiterate those tasks essential for installing the Docker Community Edition on Ubuntu 14.04 and 16.04.
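
For a quick preview of the 16.04 flow (mirroring the official repository setup), the essential steps are:

$ sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update && sudo apt-get install -y docker-ce
$ sudo docker run hello-world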

Continue reading “Docker: Installing Docker CE on Ubuntu 14.04 and 16.04”

Squid: Configuring an Ubuntu host to use a Squid proxy for internet access

Once you have a Squid proxy setup as described in my article here, the next challenge is configuring your Ubuntu servers so that they use this proxy by default instead of attempting direct internet connections.

There are several entities we want using Squid by default: the apt package manager, interactive consoles and wget/curl, and Java applications.
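
As a sketch with an illustrative proxy host and the default Squid port, each of those boils down to a proxy setting in the right place:

# apt package manager
$ echo 'Acquire::http::Proxy "http://squidhost:3128";' | sudo tee /etc/apt/apt.conf.d/95proxy

# interactive consoles, wget, and curl
$ export http_proxy=http://squidhost:3128
$ export https_proxy=http://squidhost:3128

# Java applications typically take JVM flags such as -Dhttp.proxyHost=squidhost -Dhttp.proxyPort=3128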

Continue reading “Squid: Configuring an Ubuntu host to use a Squid proxy for internet access”

Squid: Controlling network access using Squid and whitelisted domains

Having your production servers go through a proxy like Squid for internet access can be an architectural best practice that provides network security as well as caching efficiencies.

For further security, denying access to all requests but an explicit whitelist of domains provides auditable control.
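
The core of that policy in squid.conf (domains are illustrative) is an ACL naming the allowed destinations, followed by rules that allow the list and deny everything else:

acl whitelist dstdomain .ubuntu.com .launchpad.net
http_access allow whitelist
http_access deny all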

Continue reading “Squid: Controlling network access using Squid and whitelisted domains”

HAProxy: Using HAProxy for SSL termination on Ubuntu

HAProxy is a high performance TCP/HTTP (Level 4 and Level 7) load balancer and reverse proxy.  A common pattern is allowing HAProxy to be the fronting SSL-termination point, and then HAProxy determines which pooled backend server serves the request.
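
The gist, with illustrative paths and backend name, is that HAProxy wants the certificate and private key concatenated into a single PEM file, which the frontend then binds to:

$ sudo bash -c 'cat /etc/ssl/certs/mysite.crt /etc/ssl/private/mysite.key > /etc/haproxy/certs/mysite.pem'

# then in haproxy.cfg:
#   frontend https-in
#       bind *:443 ssl crt /etc/haproxy/certs/mysite.pem
#       default_backend web-pool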

Continue reading “HAProxy: Using HAProxy for SSL termination on Ubuntu”

Nginx: Using Nginx for SSL termination on Ubuntu

Nginx is a popular reverse proxy and load balancer that focuses on level 7 (application) traffic.  A common pattern is allowing Nginx to be the fronting SSL-termination point, and then Nginx determines which pooled backend server is best available to serve the request.

Continue reading “Nginx: Using Nginx for SSL termination on Ubuntu”

Apache2: Enable LDAP authentication and SSL termination for Ubuntu

Some web applications leave authentication as an orthogonal concern to the application – not including any kind of login functionality and instead leaving authentication as an operational concern.

When this happens, a reverse proxy that has an LDAP integration can act as an architectural sentry in front of the web application and also fulfill the requirements for Single Sign-On.  Apache2 serves this purpose very well with minimal overhead.

Continue reading “Apache2: Enable LDAP authentication and SSL termination for Ubuntu”

Ubuntu: Creating a self-signed certificate using OpenSSL on Ubuntu

There are numerous articles I’ve written  where a self-signed certificate is a prerequisite for deploying a piece of infrastructure.

Here are the quick steps for installing a self-signed certificate on an Ubuntu server.

Some of you will want a full explanation of the steps required; others will only want to run the script I'm putting on GitHub.
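
For the impatient, the one-liner at the heart of it (hostname and paths are illustrative) is:

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/mysite.key \
    -out /etc/ssl/certs/mysite.crt \
    -subj "/C=US/ST=NY/O=MyOrg/CN=mysite.example.com"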

Continue reading “Ubuntu: Creating a self-signed certificate using OpenSSL on Ubuntu”

Jenkins: Setting up a continuous integration server on Ubuntu

Jenkins is the open-source automation server that is critical in building a continuous integration and delivery pipeline.  It is extensible and has a wealth of plugins that  integrate with numerous enterprise systems.

Here are the detailed steps for installing a Jenkins server on Ubuntu.

Continue reading “Jenkins: Setting up a continuous integration server on Ubuntu”

Maven: Installing a 3rd party jar to a local or remote repository

Especially in enterprise application development, there can be 3rd party dependencies that are not available in public Maven repositories.  These may be internal, business specific libraries or licensed libraries that have limitations on usage.

When this is the case, you can either publish to a private Maven repository that controls authorization or you can put them into your local cached maven repository.
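
The two options map to the install:install-file and deploy:deploy-file goals (coordinates and repository URL below are illustrative):

# into the local ~/.m2 cache
$ mvn install:install-file -Dfile=vendor-lib-1.2.jar \
    -DgroupId=com.example -DartifactId=vendor-lib -Dversion=1.2 -Dpackaging=jar

# or into a private remote repository
$ mvn deploy:deploy-file -Dfile=vendor-lib-1.2.jar \
    -DgroupId=com.example -DartifactId=vendor-lib -Dversion=1.2 -Dpackaging=jar \
    -DrepositoryId=internal-repo -Durl=https://repo.example.com/releases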

Continue reading “Maven: Installing a 3rd party jar to a local or remote repository”

Maven: Installing a private Maven repository on Ubuntu using Artifactory

An essential part of the standard build process for Java applications is having a set of repositories where project artifacts are stored.

Artifact curation provides the ability to manage dependencies, quickly roll back releases, support compatibility of downstream projects, promote builds from test to production, support a continuous build pipeline, and maintain auditability.

JFrog puts out an open-source Maven server called Artifactory that is perfect for setting up a private Maven repository for internal applications.

Continue reading “Maven: Installing a private Maven repository on Ubuntu using Artifactory”

Monitoring: Java JMX exploration from the console using jmxterm

Java JMX (Java Management Extensions) is a standardized way of monitoring Java based applications.  The managed resources (MBeans) are defined and exposed by the JVM, application server, and application – and offer a view into these layers that can provide invaluable monitoring data.

But in order to report back the JMX data you must know the fully expanded path of the MBean and its available attributes/operations.  If you are on a desktop, tools like jconsole provide a nice GUI for drilling down into the MBean hierarchy.  But if you are in a server environment and JMX is not enabled for remote access, you may need a console alternative.

An open-source project called jmxterm comes packaged as a single uber jar that makes it easy to enumerate and explore the MBeans exposed by a Java based application.
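
A flavor of the interactive workflow (the jar version and JMX port are illustrative) looks like:

$ java -jar jmxterm-1.0.0-uber.jar
$> open localhost:9010
$> domains
$> beans -d java.lang
$> get -b java.lang:type=Memory HeapMemoryUsage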

Continue reading “Monitoring: Java JMX exploration from the console using jmxterm”

Ubuntu: Using strace to get a view into file and network activity of a process

strace is a handy utility for tracing system, file, and network calls on a Linux system.  It can either attach to an already running process or launch a new process and trace it.

Some of the most common troubleshooting scenarios are needing to isolate either the network or file system activity of a process.  For example, to determine whether an application was attempting to reach out to a server on the expected port, or to understand why a startup configuration file was not being read from the expected directory.
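
Those two scenarios translate directly into filtered traces, for example (the PID and command are illustrative):

# network activity of an already running process and its children
$ sudo strace -f -p 1234 -e trace=network

# file access of a freshly launched process, saved for later review
$ strace -f -e trace=open,stat -o /tmp/strace.out java -jar myapp.jar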

Continue reading “Ubuntu: Using strace to get a view into file and network activity of a process”

Ubuntu: Using tcpdump for analysis of network traffic and port usage

tcpdump comes standard on Ubuntu servers and is an invaluable tool in determining traffic coming in and out of a host.

As network infrastructures have become more complex and security conscious, validating network flow from client hosts through potentially multiple proxies and ultimately to a destination host and port has become more important than ever.

Let me list a few of the more common use cases.
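
To give a flavor before diving in (interface, host, and port are placeholders):

# watch traffic to or from a specific host
$ sudo tcpdump -i eth0 -nn host 10.0.0.5

# confirm whether anything is arriving on a given port
$ sudo tcpdump -i any -nn port 8080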

Continue reading “Ubuntu: Using tcpdump for analysis of network traffic and port usage”

Nginx: Custom access log format and error levels

Nginx is a powerful application level proxy server.  Whether for troubleshooting or analysis, enabling log levels and custom formats for the access/error logs is a common requirement.

Error Logs

By default, only messages in the error category are logged.  If you want to enable more details, then modify nginx.conf like:

error_log file [level]

Enabling debug level on Linux would usually look like:

error_log /var/log/nginx/error.log debug;

Access Logs

Access logs and their format are also customized in nginx.conf.  By default, if no format is specified then the combined format is used.

access_log file [format]
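
For example, a custom format that adds request timing (the format name and fields are illustrative) is declared once and then referenced by the access_log directive:

log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent $request_time';

access_log /var/log/nginx/access.log timed;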

Continue reading “Nginx: Custom access log format and error levels”

AppDynamics: Enabling verbose debug logs for Agents

Enabling verbose logs for an AppDynamics machine or database agent can be invaluable for troubleshooting connectivity or network issues.

Luckily, this is easily done by editing the conf/logging/log4j.xml file.  By default, only error-level messages are sent to the logs:

<root>
  <priority value="error"/>
  <appender-ref ref="FileAppender"/>
</root>

But you can modify this so that debug level is sent:

<root>
  <priority value="debug"/>
  <appender-ref ref="FileAppender"/>
</root>

Continue reading “AppDynamics: Enabling verbose debug logs for Agents”

OpenWrt: Upgrading OpenWrt to the latest snapshot build

Although stable releases of OpenWrt come out every 6 to 12 months, the automatically built snapshots offer a way to embrace the latest features, patches, and security fixes without waiting that long.

A sysupgrade procedure works by saving the configuration files from known locations, deleting the entire filesystem, installing the new version of OpenWrt,  and then restoring the configuration files.

This is usually painless, but there can be issues if configuration changes have been made in non-standard file locations and are not saved.  Additionally, custom packages do not survive the sysupgrade and have to be reinstalled (to ensure compatibility with the kernel) and their new configurations must be manually merged.

Continue reading “OpenWrt: Upgrading OpenWrt to the latest snapshot build”