
I'm trying to set up a separate cluster (kibanacluster) for monitoring my primary Elasticsearch cluster (marveltest). Below are the ES, Marvel and Kibana versions I'm using. The ES version is fixed for the moment; I can upgrade or downgrade the other components if needed.

  • kibana-4.4.1
  • elasticsearch-2.2.1
  • marvel-agent-2.2.1

The monitoring cluster and Kibana are both running on the host 192.168.2.124 and the primary cluster is running on a separate host, 192.168.2.116.

192.168.2.116: elasticsearch.yml

marvel.agent.exporter.es.hosts: ["192.168.2.124"]
marvel.enabled: true
marvel.agent.exporters:

id1:
    type: http
    host: ["http://192.168.2.124:9200"]

Looking at the DEBUG logs in the monitoring cluster, I can see data is coming from the primary cluster but it is getting "filtered" since the cluster name is different.

[2016-07-04 16:33:25,144][DEBUG][transport.netty ] [nodek] connected to node [{#zen_unicast_2#}{192.168.2.124}{192.168.2.124:9300}]

[2016-07-04 16:33:25,144][DEBUG][transport.netty ] [nodek] connected to node [{#zen_unicast_1#}{192.168.2.116}{192.168.2.116:9300}]

[2016-07-04 16:33:25,183][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]

[2016-07-04 16:33:26,533][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]

[2016-07-04 16:33:28,039][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]

[2016-07-04 16:33:28,040][DEBUG][transport.netty ] [nodek] disconnecting from [{#zen_unicast_2#}{192.168.2.124}{192.168.2.124:9300}] due to explicit disconnect call

[2016-07-04 16:33:28,040][DEBUG][discovery.zen ] [nodek] filtered ping responses: (filter_client[true], filter_data[false]) --> ping_response{node [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}], id[42], master [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}], hasJoinedOnce [true], cluster_name[kibanacluster]}

[2016-07-04 16:33:28,053][DEBUG][transport.netty ] [nodek] disconnecting from [{#zen_unicast_1#}{192.168.2.116}{192.168.2.116:9300}] due to explicit disconnect call

[2016-07-04 16:33:28,057][DEBUG][transport.netty ] [nodek] connected to node [{nodek}{vQ-Iq8dKSz26AJUX77Ncfw}{192.168.2.124}{192.168.2.124:9300}]

[2016-07-04 16:33:28,117][DEBUG][discovery.zen.publish ] [nodek] received full cluster state version 32 with size 5589

What is the unicast configuration in both clusters? – Andrei Stefan

1 Answer


The issue is that you are mixing Marvel 1.x settings with Marvel 2.2 settings, and your other configuration also seems to be off, as Andrei pointed out in the comment.

marvel.agent.exporter.es.hosts: ["192.168.2.124"]

This isn't a setting known to Marvel 2.x. And depending on your copy/paste, it's also possible that the YAML is malformed due to whitespace:

marvel.agent.exporters:

id1:
    type: http
    host: ["http://192.168.2.124:9200"]

This should be:

marvel.agent.exporters:
  id1:
    type: http
    host: ["http://192.168.2.124:9200"]

As Andrei was hinting, you have likely added the production node(s) to your monitoring cluster's discovery.zen.ping.unicast.hosts, which makes the monitoring node try to join their cluster. I suspect you can just delete that setting altogether in your monitoring cluster; a sketch of what the monitoring node's config could look like follows.
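
A minimal elasticsearch.yml sketch for the monitoring node, assuming a single-node monitoring cluster. The cluster name and address come from your logs, while node.name and network.host are assumptions you should adjust to your setup:

cluster.name: kibanacluster
node.name: nodek
network.host: 192.168.2.124
# Deliberately no discovery.zen.ping.unicast.hosts entry listing 192.168.2.116;
# the monitoring cluster should not try to discover the production nodes.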

[2016-07-04 16:33:26,533][DEBUG][discovery.zen.ping.unicast] [nodek] [1] filtering out response from {node1}{Rmgg0Mw1TSmIpytqfnFgFQ}{192.168.2.116}{192.168.2.116:9300}, not same cluster_name [marveltest]

This indicates that it's ignoring a node that it is connecting to, because the other node (node1) isn't in the same cluster.


Setting up a separate monitoring cluster is pretty straightforward, but it requires understanding the moving parts first.

  1. You need a separate cluster with at least one node (most people get by with one node).
    • This separate cluster effectively has no knowledge about the cluster(s) it monitors. It only receives data.
  2. You need to send the data from the production cluster(s) to that separate cluster.
  3. The monitoring cluster interprets the data using Kibana + the Marvel UI plugin to display charts.

So, what you need:

  • Your production cluster needs to install marvel-agent on each node (see the install sketch at the end of this answer).
  • Each node needs to configure the exporter(s):

This is the same as you had before:

marvel.agent.exporters:
  id1:
    type: http
    host: ["http://192.168.2.124:9200"]

  • Kibana should talk to the monitoring cluster (192.168.2.124 in this example) and Kibana needs the same version of the Marvel UI plugin installed.
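
To finish it off, here is a rough sketch of those remaining pieces. The plugin install commands and the kibana.yml setting below are the standard ones for Marvel 2.2 / Kibana 4.4; the exact paths and whether you pin the version depend on your installation:

# On each production node (e.g. 192.168.2.116), from the Elasticsearch home directory
# (marvel-agent requires the license plugin):
bin/plugin install license
bin/plugin install marvel-agent

# On the Kibana host (192.168.2.124), point Kibana at the monitoring cluster in kibana.yml:
elasticsearch.url: "http://192.168.2.124:9200"

# Install the matching Marvel UI plugin into Kibana:
bin/kibana plugin --install elasticsearch/marvel/2.2.1

# Once the agent is exporting, verify data is arriving by listing the Marvel indices
# on the monitoring cluster:
curl 'http://192.168.2.124:9200/_cat/indices/.marvel-*?v'

After installing the plugin and adding the exporter settings, restart the production nodes, and the Marvel UI in Kibana should start showing the marveltest cluster.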