It is very common to have Logstash create time-based indexes in Elasticsearch that follow the format <indexName>-YYYY.MM.DD, meaning all events whose @timestamp falls on a given day go to the same daily index.
However, if you do not explicitly specify an index template that maps each field to a type, you can end up with unexpected query results. The reason is that without explicit mappings, each freshly created daily index infers a field's type dynamically from the first event that contains it.
In this article, I’ll show you how to create explicit custom index templates so that field types are uniform across your time-series indexes.
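As a preview of the idea, here is a minimal custom template (the field names `clientip` and `bytes` are hypothetical, and it assumes daily indexes named `logstash-*`) that could be applied with `curl -XPUT localhost:9200/_template/logstash_custom`:

```json
{
  "template": "logstash-*",
  "order": 1,
  "mappings": {
    "_default_": {
      "properties": {
        "clientip": { "type": "ip" },
        "bytes":    { "type": "long" }
      }
    }
  }
}
```

With something like this in place, every new daily index matching `logstash-*` maps `clientip` as an IP address and `bytes` as a long, regardless of what the first event of the day happens to look like.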
Continue reading “ELK: Custom template mappings to force field types”
Elastic's Metricbeat is a lightweight shipper of both system and application metrics that runs as an agent on a client host. That means that along with standard CPU/memory/disk/network metrics, you can also monitor Apache, Docker, Nginx, Redis, etc., or write your own collector in Go.
In this article we will describe installing Metricbeat 5.x on Ubuntu when the back-end Elasticsearch version is either 5.x or 2.x.
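As a rough sketch of where we are headed, a minimal `metricbeat.yml` (the host and period values here are illustrative) that ships basic system metrics to Elasticsearch looks like:

```yaml
metricbeat.modules:
  # Collect core system metrics every 10 seconds
  - module: system
    metricsets: ["cpu", "memory", "network", "filesystem"]
    period: 10s

# Send events directly to the Elasticsearch back end
output.elasticsearch:
  hosts: ["http://localhost:9200"]
```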
Continue reading “ELK: Installing MetricBeat for collecting system and application metrics”
Although the ELK stack has rich support for clustering, clustering is not supported over WAN connections because Elasticsearch is sensitive to latency. There are also practical concerns about network throughput, given how much data some installations index on an hourly basis.
So as nice as it would be to have a unified, eventually consistent cluster spanning your North American and European datacenters, that is not currently possible. Spanning availability zones within the same AWS region will work, but not spanning different regions.
But first let’s consider why we want a distributed Elasticsearch cluster in the first place. It is not typically for geo failover or disaster recovery (because we can implement that separately in each datacenter), but more often because we want end users to have a federated search experience.
We want end users to go to a single Kibana instance, regardless of which cluster they want to search, and be able to execute a search query against the data. A Tribe node can bridge two distinct clusters for this purpose.
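As a sketch (the cluster names and hosts here are hypothetical), the `elasticsearch.yml` of a tribe node simply lists the remote clusters it should join:

```yaml
tribe:
  useast:
    cluster.name: elasticsearch-us
    discovery.zen.ping.unicast.hosts: ["es-us-master1:9300"]
  eucentral:
    cluster.name: elasticsearch-eu
    discovery.zen.ping.unicast.hosts: ["es-eu-master1:9300"]
```

The tribe node joins both clusters as a client, so a single Kibana instance pointed at it can query data from either side.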
Continue reading “ELK: Federated Search with a Tribe node”
Kibana is the end user web application that allows us to query Elasticsearch data and create dashboards that can be used for analysis and decision making.
Although Kibana can be pointed to any of the nodes in your Elasticsearch cluster, the best way to distribute requests across the nodes is to use a non-master, non-data Client node. Client nodes have the following properties set in elasticsearch.yml:
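As a quick sketch, the key settings are the ones that disable the master and data roles on that node:

```yaml
node.master: false
node.data: false
# On Elasticsearch 5.x, also disable the ingest role
# to make this a purely coordinating node:
node.ingest: false
```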
Continue reading “ELK: Pointing Kibana to a Client Node”