
I have a number of clusters running Elasticsearch 5.2, plus a tribe cluster for cross-cluster search, all hosted on GCP. I also have an alias set up on an index on one of the clusters.

In the process of changing the masters of the clusters, I updated the tribe config on one of the tribe nodes and restarted the Elasticsearch service. The new config looks like this:

cluster.name: name of the cluster
network.host: 0.0.0.0
cloud:
  gce:
    project_id: Project ID
    zone: [zones]

discovery:
  type: gce
  gce:
    tags: network-tag

tribe:
  blocks:
    write: true
    metadata: true
  cluster_1:
    cluster.name: cluster_1_name
    discovery.zen.ping.unicast.hosts: ["new_master_1.1", "new_master_1.2"]
  cluster_2:
    cluster.name: cluster_2_name
    discovery.zen.ping.unicast.hosts: ["new_master_2.1", "new_master_2.2"]

action.search.shard_count.limit: XXXX

Now when I run curl localhost:9200/alias/_search it returns an index_not_found_exception, but when I run curl localhost:9200/index_name/_search I get the expected output. Intriguingly, the old tribe node whose config I haven't updated yet works fine with both curl commands. The only difference between the two configs is the list of masters for the clusters.

I don't know how to fix this, and I'd appreciate any help in resolving it.

Thanks a lot.

Edit: When I inspect the tribe log, I see the index being discovered for the cluster it does not belong to, but not for the cluster that actually owns it. I'm not sure how this helps identify and resolve the issue.
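For anyone diagnosing something similar: comparing what the tribe node actually merged against what you expect can narrow this down. The commands below are a sketch, assuming the tribe node is listening on localhost:9200:

```shell
# List every index the tribe node has merged from its upstream clusters
curl 'localhost:9200/_cat/indices?v'

# List the aliases the tribe node knows about; a missing row here would
# explain an index_not_found_exception when querying by alias
curl 'localhost:9200/_cat/aliases?v'
```

Running the same commands against the old (still-working) tribe node and diffing the output shows exactly which index or alias failed to merge.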


1 Answer


I managed to fix it. The cause was an index-name conflict: two clusters had an index with the same name, so once the tribe node merged it from one cluster, it failed to add the copy from the second. All I had to do was configure the tribe node to prefer one cluster on conflict.
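In Elasticsearch 5.x the setting behind this is tribe.on_conflict, which accepts any (the default), drop, or prefer_{cluster_name}. A minimal sketch, assuming cluster_1 should win conflicts (the cluster keys match the question's config):

```yaml
tribe:
  on_conflict: prefer_cluster_1   # on a duplicate index name, keep cluster_1's copy
  cluster_1:
    cluster.name: cluster_1_name
  cluster_2:
    cluster.name: cluster_2_name
```

With prefer_cluster_1, the conflicting index resolves to cluster_1's copy, and its aliases merge into the tribe state as well, so alias searches work again.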