The heart of the ELK stack is Elasticsearch. In order to provide high availability and scalability, it needs to be deployed as a cluster with master and data nodes. The Elasticsearch cluster is responsible both for indexing incoming data and for serving searches against that indexed data.
As described in the documentation, if there is one absolutely critical resource, it is memory. Keeping the heap size below 32GB allows the JVM to use compressed object pointers, which is preferred. Swapping incurs a severe performance penalty, so minimize swappiness on your Linux host.
Indexing and merging are I/O-intensive operations, so the second limiting factor is usually disk I/O; prefer SSDs where possible.
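As a sketch, the memory-related settings might look like the following (the 16g heap value is illustrative; the point is a fixed heap kept under the compressed-oops cutoff):

```
# jvm.options — fixed heap size, min and max equal, kept under 32GB
-Xms16g
-Xmx16g

# elasticsearch.yml — lock the heap in RAM so the OS cannot swap it out
bootstrap.memory_lock: true
```

Setting -Xms and -Xmx to the same value avoids heap resizing pauses, and memory locking complements a low vm.swappiness on the host.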
Master nodes are responsible for cluster-wide actions such as creating and deleting indices, tracking node membership, and allocating shards. Having multiple master-eligible nodes provides high availability.
But in order to avoid the “split-brain” problem, where more than one node believes it is the master, run at least 3 master-eligible nodes (1 would not provide HA, and 2 can lead to split-brain). Typically, these master nodes are small because they are exclusively cluster masters and do not hold data or serve query results. The master quorum is set in elasticsearch.yml using the formula ‘number of master-eligible nodes / 2 + 1‘:
# 3 total masters; N/2 + 1 = 2
discovery.zen.minimum_master_nodes: 2
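The quorum formula above is just integer division plus one; a minimal sketch (the function name is illustrative):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Quorum that avoids split-brain: floor(N / 2) + 1."""
    return master_eligible // 2 + 1

# With 3 master-eligible nodes, a quorum of 2 means a partitioned
# minority can never elect its own master.
print(minimum_master_nodes(3))  # 2
print(minimum_master_nodes(5))  # 3
```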
Both master and data nodes should be in the same subnet or datacenter; they are not designed to be geographically distributed. Replication is not meant to be done across datacenters with higher-than-LAN latency. If you need federated search across distant clusters, you need to use tribe nodes, which are described in the documentation and in an article I wrote here.
Data nodes hold the shards that contain the indexed documents, and handle the CRUD and search operations, which are memory- and I/O-intensive.
Increasing the number of data nodes allows for larger index sizes, longer retention, and better search speed depending on node utilization and shard/replica settings.
Index Shards and Replicas
Spreading shards across multiple nodes allows the index to be larger than what you could accomplish on a single host (horizontal scaling). Likewise, the ability to run searches against replica shards can improve performance as you increase the number of nodes.
By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica, which means that each index will have a total of 10 shards (5 primaries, plus one replica of each primary). Having more replicas provides more redundancy, meaning we can lose more nodes without losing data. Read here for a good explanation and diagrams for a fully replicated 3 node deployment with 2 replicas.
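The shard arithmetic is simple: every primary is copied once per replica. A minimal sketch:

```python
def total_shards(primaries: int, replicas: int) -> int:
    """Each primary shard gets `replicas` additional copies."""
    return primaries * (1 + replicas)

# Default index settings: 5 primaries, 1 replica -> 10 shards total.
print(total_shards(5, 1))   # 10

# A fully replicated 3-node deployment with 2 replicas of 5 primaries.
print(total_shards(5, 2))   # 15
```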
If you start with 3 physical nodes but know that large growth is coming shortly, you should start with an overallocation of shards (let’s say 6), which will eventually spread across more nodes as you bring them online. Once the number of primary shards is set at index creation, changing it requires a complete reindex [1, 2, 3].
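As a sketch, overallocating primaries is done in the index settings at creation time (the index name here is illustrative):

```
PUT /my-index
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}
```

With 6 primaries on 3 nodes, each node initially holds 2 primaries; as you add nodes, Elasticsearch rebalances shards onto them without reindexing.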
One of the key settings that affects indexing performance is an index’s ‘refresh_interval‘. If you do not need your incoming data to be immediately available for search, use 30 seconds or higher (the default is 1 second). This can greatly improve indexing performance, as long as a small lag between insertion and searchability is acceptable.
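Unlike the primary shard count, this setting can be changed on a live index through the update-settings API; a sketch (index name illustrative):

```
PUT /my-index/_settings
{
  "index": {
    "refresh_interval": "30s"
  }
}
```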