ELK: Installing Logstash on Ubuntu 14.04

Logstash provides a powerful mechanism for listening to various input sources, filtering and extracting fields, and then sending events to a persistence store like Elasticsearch.

Installing Logstash on Ubuntu is well documented, so in this article I will focus on the Ubuntu-specific steps required for Logstash 2.x and 5.x.

Continue reading “ELK: Installing Logstash on Ubuntu 14.04”

ELK: Using Ruby in Logstash filters

Logstash has a rich set of filters, and you can even write your own, but often this is not necessary since there is an out-of-the-box filter that allows you to embed Ruby code directly in the configuration file.

Using logstash-filter-ruby, you can use all the power of Ruby string manipulation to parse fields with an exotic regular expression, handle an incomplete date format, write to a file, or even make a web service call.
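
As a hedged sketch of what this can look like (the field name and the pipe delimiter below are made up for illustration), a ruby filter block in the pipeline configuration manipulates each event with plain Ruby:

    filter {
      ruby {
        # split a pipe-delimited message and keep the first token;
        # event.get/event.set is the Logstash 5.x event API
        # (on 2.x the event['message'] style of access is used instead)
        code => "
          parts = event.get('message').to_s.split('|')
          event.set('first_token', parts[0])
        "
      }
    }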

Continue reading “ELK: Using Ruby in Logstash filters”

ELK: Architectural points of extension and scalability for the ELK stack

The ELK stack (Elasticsearch-Logstash-Kibana) is a horizontally scalable solution with multiple tiers and points of extension and scalability.

Because so many companies have adopted the platform and tuned it for their specific use cases, it would be impossible to enumerate all the novel ways in which scalability and availability have been enhanced by load balancers, message queues, indexes on distinct physical drives, etc.  So in this article I want to explore the obvious extension points, and encourage the reader to treat this as a starting point in their own design and deployment.

Continue reading “ELK: Architectural points of extension and scalability for the ELK stack”

ELK: Feeding the logging pipeline

The most varied point in an ELK (Elasticsearch-Logstash-Kibana) stack is the mechanism by which custom events and logs will get sent to Logstash for processing.

Companies running Java applications with logging sent to log4j or SLF4J/Logback will have local log files that need to be tailed.  Applications running in containers may send everything to stdout/stderr, or have drivers for sending this on to syslog and other locations.  Network appliances tend to have SNMP or remote syslog outputs.

But regardless of the details, events must flow from their source to the Logstash indexing layer.  Doing this with maximized availability and scalability, and without putting excessive pressure on the Logstash indexing layer, is the primary concern of this article.

Continue reading “ELK: Feeding the logging pipeline”

Python: Using Python, JSON, and Jinja2 to construct a set of Logstash filters

Python is a language whose advantages are well documented, and the fact that it has become ubiquitous on most Linux distributions makes it well suited for quick scripting duties.

In this article I’ll go through an example of using Python to read entries from a JSON file, and from each of those entries create a local file.  We’ll use the Jinja2 templating language to generate each file from a base template.

Our particular example will be the generation of Logstash filters for log processing, but the techniques for using JSON to drive Python processing or Jinja2 templating within Python are general purpose.
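
A minimal sketch of the idea, assuming a hypothetical filters.json whose entries each carry a name, a grok pattern, and a tag (the article uses its own schema and a richer template):

    import json
    from jinja2 import Template

    # each JSON entry describes one Logstash filter to generate (hypothetical schema)
    with open("filters.json") as f:
        entries = json.load(f)

    # base template for a simple grok filter
    template = Template(
        "filter {\n"
        "  grok {\n"
        "    match => { \"message\" => \"{{ pattern }}\" }\n"
        "    add_tag => [\"{{ tag }}\"]\n"
        "  }\n"
        "}\n"
    )

    # render one local .conf file per entry
    for entry in entries:
        with open("{0}.conf".format(entry["name"]), "w") as out:
            out.write(template.render(pattern=entry["pattern"], tag=entry["tag"]))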

Continue reading “Python: Using Python, JSON, and Jinja2 to construct a set of Logstash filters”

Syslog: Sending Java log4j2 to rsyslog on Ubuntu

Logging has always been a critical part of application development.  But the rise of OS virtualization, application containers, and cloud-scale logging solutions has turned logging into something bigger than managing local debug files.

Modern applications and services are now expected to feed log aggregation and analysis stacks (ELK, Graylog, Loggly, Splunk, etc).  This can be done in a multitude of ways; in this post I want to focus on modifying log4j2 so that it sends directly to an rsyslog server.

Even though we focus on sending to an Ubuntu rsyslog server in this post, the destination could be any entity listening for syslog traffic, such as Logstash.
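
To give a sense of the direction, here is a rough sketch of a log4j2.xml using the Syslog appender; the host, port, and application name are placeholders, and the post walks through the real configuration and the rsyslog side:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="WARN">
      <Appenders>
        <!-- send log events over the network to rsyslog in RFC 5424 format -->
        <Syslog name="remoteSyslog" host="rsyslog.example.com" port="514"
                protocol="UDP" format="RFC5424" appName="myapp" facility="LOCAL0"/>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="remoteSyslog"/>
        </Root>
      </Loggers>
    </Configuration>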

Continue reading “Syslog: Sending Java log4j2 to rsyslog on Ubuntu”

Logstash: Using metrics to debug the filtering process

When building your Logstash filter, you would often like to validate your assumptions on a large sampling of input events without sending all the output to Elasticsearch.

Using Logstash metrics and conditionals, we can easily show:

  • How many input events were processed successfully
  • How many input events had errors
  • An error file containing each event that failed processing

This technique gives you the ability to track your success rate across a large input set, and then do a postmortem review of each event that failed.

I’ll walk you through a Logstash conf file that illustrates this concept.
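
As a hedged preview (the grok pattern, meter names, and file path below are placeholders rather than the conf from the walkthrough), the shape of it is: grok tags failures, a metrics filter counts each outcome, and conditionals route the outputs:

    filter {
      grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }

      # grok adds a _grokparsefailure tag when the pattern does not match,
      # so successes and failures can increment separate meters
      if "_grokparsefailure" in [tags] {
        metrics { meter => "events_failed" add_tag => "metric" }
      } else {
        metrics { meter => "events_ok" add_tag => "metric" }
      }
    }

    output {
      if "metric" in [tags] {
        # periodic counter events emitted by the metrics filters
        stdout { codec => rubydebug }
      } else if "_grokparsefailure" in [tags] {
        # keep every failed event for postmortem review
        file { path => "/tmp/grok_failures.log" }
      }
      # successful events are deliberately not sent to Elasticsearch in this test run
    }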

Continue reading “Logstash: Using metrics to debug the filtering process”

Logstash: Testing Logstash grok patterns online

In my previous posts, I have shown how to test grok patterns locally using Ruby on Linux and Windows.  This works well when your VM does not have full internet access, has only console access, or for any other reason you want to test locally.

If you have access to a graphical web browser and the log file, there is a nice online grok constructor here and here, and by simply entering a sampling of the log lines and a grok pattern, you can verify that all the lines are parsed correctly.

Here is a small example to start you off:
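
Taking the sample line from the standard grok documentation (your own log lines and pattern would go in their place):

    # sample log line
    55.3.244.1 GET /index.html 15824 0.043

    # grok pattern that parses it into named fields
    %{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}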

Continue reading “Logstash: Testing Logstash grok patterns online”

Logstash: Testing Logstash grok patterns locally on Windows

If the logs you are shipping to Logstash come from a Windows OS, it can be even more difficult to quickly troubleshoot a grok pattern being sent to the Logstash service.

It can be beneficial to quickly validate your grok patterns directly on the Windows host.  Here is an easy way to test a log against a grok pattern:
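
One way to do this, sketched here under the assumption that a Logstash distribution is simply unzipped on the Windows host (the full post may use a lighter-weight method), is a tiny stdin-to-stdout pipeline you can paste log lines into:

    # test.conf - paste log lines on stdin, see the parsed fields on stdout
    input  { stdin { } }
    filter {
      grok { match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}" } }
    }
    output { stdout { codec => rubydebug } }

    # run from the Logstash directory in a command prompt:
    #   bin\logstash.bat -f test.conf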

Continue reading “Logstash: Testing Logstash grok patterns locally on Windows”