ELK: Federated Search with a Tribe node

Although the ELK stack has rich support for clustering, clustering is not supported over WAN connections because Elasticsearch is sensitive to latency.  There are also practical concerns about network throughput, given how much data some installations index on an hourly basis.

So as nice as it would be to have a unified, eventually consistent cluster spanning your North American and European datacenters, that is not currently possible.  Spanning availability zones within the same AWS region will work, but not spanning different regions.

Federated Search

But first, let’s consider why we want a distributed Elasticsearch cluster in the first place.  It is typically not for geo-failover or disaster recovery (we can implement those separately in each datacenter), but more often because we want end users to have a federated search experience.

We want end users to go to a single Kibana instance, regardless of which cluster they want to search, and be able to execute a search query against the data.  A Tribe node can bridge two distinct clusters for this purpose.

In order to get a federated search across multiple Elasticsearch clusters, we need to point Kibana at a Tribe node, much like we point Kibana at a Client node in a single cluster model.

Tribe Node

A tribe node is similar to a client node in that it joins a cluster but has no master or data responsibilities.  Unlike other nodes, however, a tribe node can be a member of multiple clusters, which gives it the ability to execute read and write operations against multiple clusters that may be geographically distributed.


There are limitations, however.  A tribe node requires unique index names across clusters.  If there is an ‘applog’ index in one cluster, it cannot see an ‘applog’ index in another cluster.  If you know an index will be used from a tribe node, you should adopt a naming convention.  For example, ‘us_applog’ and ‘eu_applog’ would be appropriate names for the indexes in your American and European clusters.

Now, when a user logs into Kibana, they can select the ‘eu_applog’ index to search European application logs, or ‘us_applog’ for American application logs.
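The naming convention above can be captured in a small helper.  This is a hypothetical sketch, not part of Elasticsearch or Kibana; the region prefixes are just the examples from this article.

```python
# Hypothetical helper illustrating the region-prefix naming convention
# described above. The prefixes are examples from this article, not an
# Elasticsearch feature.

REGION_PREFIXES = {"us": "us_", "eu": "eu_"}

def regional_index(region: str, base_name: str) -> str:
    """Return a cluster-unique index name, e.g. 'us_applog'."""
    try:
        return REGION_PREFIXES[region] + base_name
    except KeyError:
        raise ValueError(f"unknown region: {region}")

print(regional_index("us", "applog"))  # -> us_applog
print(regional_index("eu", "applog"))  # -> eu_applog
```

Indexing pipelines in each datacenter (for example, the Logstash output section) would apply the matching prefix so that index names never collide at the tribe node.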

It is unfortunate that the Kibana interface cannot recognize that these two indexes are field-compatible and search across both at the same time, but that is something you have to live with for now.  Elasticsearch has a public API if you want to create your own application that understands a concept like this.
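For a custom application, Elasticsearch’s REST API does accept a comma-separated list of index names in a single `_search` request, so you can query both regional indexes through the tribe node yourself.  The sketch below uses only the Python standard library; the host name `tribe-node` is an assumed placeholder.

```python
# Sketch: one search request spanning both regional indexes, sent through
# the tribe node's REST API. Elasticsearch accepts a comma-separated index
# list in the _search URL. 'tribe-node' is a placeholder host name.
import json
from urllib import request

def search_url(host: str, indexes: list) -> str:
    """Build a multi-index _search URL, e.g. .../us_applog,eu_applog/_search"""
    return f"http://{host}:9200/{','.join(indexes)}/_search"

def federated_search(host: str, indexes: list, query: dict) -> dict:
    body = json.dumps({"query": query}).encode("utf-8")
    req = request.Request(search_url(host, indexes), data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:   # POST, since a body is attached
        return json.load(resp)

# Usage (requires a running tribe node):
# hits = federated_search("tribe-node", ["us_applog", "eu_applog"],
#                         {"match": {"level": "ERROR"}})
```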


To set up an Elasticsearch node as a Tribe node, modify elasticsearch.yml with the list of clusters it should join.  In the example below, we will pretend we have two geographically distributed clusters named ‘us-cluster’ and ‘eu-cluster’, located in America and Europe respectively.

# settings for this node
node.name: mytribenode
cluster.name: tribecluster
node.master: false
node.data: false

# on ES 2.4, the tribe node would not work unless
# the host IP was specified explicitly
discovery.zen.ping.unicast.hosts: ['','localhost']

# tribe settings for this node
tribe:
  on_conflict: prefer_t1
  t1:
    cluster.name: us-cluster
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ['usm1','usm2','usm3']
  t2:
    cluster.name: eu-cluster
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ['eum1','eum2','eum3']

You do not want multicast discovery over WAN/distributed connections, which is why we specify unicast discovery pointed at the master nodes in the US and EU datacenters.

Restarting the Elasticsearch service will have this tribe node join both clusters as a member.  Searches can now be invoked against indexes in either cluster.  As mentioned earlier, a tribe node needs unique index names; if it finds an index with the same name in both clusters, it will use the one from the ‘on_conflict’ cluster.
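A quick sanity check after the restart is to list the indexes the tribe node can see (for example with `curl http://localhost:9200/_cat/indices?h=index`) and confirm that names from both clusters appear.  The hypothetical helper below groups such a listing by the regional prefixes from the naming convention above; the sample output is illustrative only.

```python
# Hypothetical helper: group the index names returned by _cat/indices
# by their regional prefix, to confirm the tribe node sees both clusters.

def group_by_region(cat_indices_output: str) -> dict:
    groups = {}
    for name in cat_indices_output.split():
        # 'us_applog' -> 'us'; unprefixed names (e.g. '.kibana') -> 'other'
        prefix = name.split("_", 1)[0] if "_" in name else "other"
        groups.setdefault(prefix, []).append(name)
    return groups

sample = "us_applog\neu_applog\nus_syslog\n"  # illustrative output only
print(group_by_region(sample))
# -> {'us': ['us_applog', 'us_syslog'], 'eu': ['eu_applog']}
```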


As described in my article on pointing Kibana at a Client node, pointing Kibana at a tribe node works the same way.  A tribe node is very light on resources, since it has no master or data duties, and can be installed on the same host that runs Kibana.  So in kibana.yml, point the elasticsearch_url key at the IP specified as ‘network.host’ earlier in elasticsearch.yml:

elasticsearch_url: ""

Kibana will create an index named ‘.kibana’ in the on_conflict cluster to store its application-specific settings.

If you cannot pull up visualizations or dashboards and see error messages in the Elasticsearch log of the tribe node, it is likely that the “.kibana” index cannot be created via the tribe node (which is neither a master nor a data node).  In this case, change elasticsearch.url in ‘/opt/kibana/config/kibana.yml’ so that it points directly at one of the masters of the on_conflict cluster, and restart Kibana.  Then create at least one saved search, visualization, and dashboard; these will be saved to the on_conflict cluster.  Finally, change elasticsearch.url back to the tribe node, and the on_conflict setting will ensure that the “.kibana” index comes from that cluster.
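After applying the workaround, you can verify that the “.kibana” index now exists on the on_conflict cluster with a HEAD request against one of its masters.  This is a standard-library sketch; the host ‘usm1’ is just the example master name from the config above.

```python
# Sketch: check whether an index exists by issuing a HEAD request to a
# cluster master. Elasticsearch returns 200 if the index exists and 404
# if it does not. 'usm1' below is the example us-cluster master.
from urllib import request, error

def index_check_url(host: str, index: str) -> str:
    """URL whose HEAD status indicates whether `index` exists on `host`."""
    return f"http://{host}:9200/{index}"

def index_exists(host: str, index: str) -> bool:
    req = request.Request(index_check_url(host, index), method="HEAD")
    try:
        with request.urlopen(req):
            return True            # 200 -> index exists
    except error.HTTPError as e:
        if e.code == 404:
            return False           # 404 -> index does not exist
        raise

# Usage (requires a reachable master):
# print(index_exists("usm1", ".kibana"))
```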

Kibana 4 with tribe node MasterNotDiscoveredException