The ELK stack (Elasticsearch-Logstash-Kibana) is a horizontally scalable solution with multiple tiers and points of extension.
Because so many companies have adopted the platform and tuned it for their specific use cases, it would be impossible to enumerate all the novel ways in which scalability and availability have been enhanced: load balancers, message queues, indexes on distinct physical drives, and so on. So in this article I want to explore the obvious extension points, and encourage the reader to treat this as a starting point for their own design and deployment.
Continue reading “ELK: Architectural points of extension and scalability for the ELK stack”
The most variable part of an ELK (Elasticsearch-Logstash-Kibana) stack is the mechanism by which custom events and logs are sent to Logstash for processing.
Companies running Java applications with logging sent to log4j or SLF4J/Logback will have local log files that need to be tailed. Applications running in containers may send everything to stdout/stderr, or use logging drivers to forward output to syslog and other destinations. Network appliances tend to offer SNMP traps or remote syslog output.
But regardless of the details, events must flow from their source to the Logstash indexing layer. Doing so with maximum availability and scalability, without putting excessive pressure on the Logstash indexing layer, is the primary concern of this article.
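As one illustration of the file-tailing case, a minimal Filebeat configuration that tails local log files and ships them to a Logstash beats input might look like the sketch below. The paths and hostname are hypothetical placeholders, and the full article may well use a different shipper or topology:

```yaml
# Sketch of a Filebeat config shipping tailed files to Logstash.
# Paths and the Logstash endpoint below are illustrative assumptions.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # hypothetical application log directory

output.logstash:
  # Pointing at a load balancer (or a list of hosts) in front of the
  # Logstash indexing layer spreads load and improves availability.
  hosts: ["logstash-lb.example.com:5044"]
```

Putting a load balancer or a message queue between the shippers and the Logstash indexers is one of the extension points this series explores.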
Continue reading “ELK: Feeding the logging pipeline”
If the logs you are shipping to Logstash originate on a Windows OS, it can be even more difficult to quickly troubleshoot a grok pattern before sending it to the Logstash service.
It can be beneficial to quickly validate your grok patterns directly on the Windows host. Here is an easy way to test a log line against a grok pattern:
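One simple approach (a sketch of the idea; the grok pattern shown is only an illustration, not necessarily the one from the full article) is to run Logstash itself with a tiny throwaway pipeline that reads lines from stdin and prints the grok result to the console:

```conf
# test-grok.conf -- throwaway pipeline for interactive grok testing.
input { stdin { } }

filter {
  grok {
    # Example pattern only; substitute the pattern you are debugging.
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output { stdout { codec => rubydebug } }
```

From the Logstash install directory on the Windows host, run `bin\logstash.bat -f test-grok.conf` and paste a sample log line: the parsed fields (or a `_grokparsefailure` tag if the pattern does not match) are printed immediately.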
Continue reading “Logstash: Testing Logstash grok patterns locally on Windows”