
I have Ganglia set up on a cluster of servers: all of them run gmond, one runs gmetad, and one runs Logstash and Elasticsearch. I'd like to use Logstash's ganglia input plugin to collect data directly from the monitoring daemons, but I've been unsuccessful so far. My Logstash logs always show:

{:timestamp=>"2015-07-14T14:33:25.192000+0000", :message=>"ganglia udp listener died", :address=>"10.1.10.178:8664", :exception=>#, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'", "/opt/logstash/lib/logstash/inputs/ganglia.rb:61:in `udp_listener'", "/opt/logstash/lib/logstash/inputs/ganglia.rb:39:in `run'", "/opt/logstash/lib/logstash/pipeline.rb:163:in `inputworker'", "/opt/logstash/lib/logstash/pipeline.rb:157:in `start_input'"], :level=>:warn}

Here's the input config I've been testing with:

input {
  ganglia {
    host => "10.1.10.178"  #ip of logstash node
    port => 8666
    type => "ganglia_test"
  }
}
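
Since the error looks like a bind failure, one variant I still want to test is dropping the host option so the plugin binds to all interfaces rather than the node's specific address (I believe 0.0.0.0 is the plugin's default when host is omitted). Just a sketch:

input {
  ganglia {
    # host omitted: should fall back to listening on 0.0.0.0 (all interfaces)
    port => 8666
    type => "ganglia_test"
  }
}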

On the gmond side, I have this in gmond.conf on one of the nodes:

udp_send_channel {
  host = 10.1.10.178  #logstash node
  port = 8666
  bind_hostname = yes
}
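
For reference, gmond accepts multiple udp_send_channel blocks, so this node keeps its usual channel to gmetad in addition to the one above. Roughly like this (the multicast address and port below are the stock Ganglia defaults, not copied from my actual config):

udp_send_channel {
  mcast_join = 239.2.11.71  # default Ganglia multicast group (existing cluster channel)
  port = 8649
  ttl = 1
}

udp_send_channel {
  host = 10.1.10.178  # Logstash node
  port = 8666
  bind_hostname = yes
}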

1 Answer

I've run into this problem too. It looks like there's been a bug in the ganglia listener since about version 1.2 (I know it used to work in 1.1).

I managed to work around the problem by adding an explicit udp input alongside it. This seems to satisfy Logstash and allows the ganglia listener to keep running.

e.g.

    input {
      udp {
        port => "1112"
        type => "dummy"
      }
      ganglia {
        port => "8666"
        type => "ganglia"
      }
    }
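
Once the listener stays up, it's worth checking that metrics are actually being decoded. A throwaway stdout output with the rubydebug codec prints each event to the console (just a sketch; swap in your real outputs once you see data flowing):

    output {
      # temporary: dump decoded ganglia events to the console for verification
      stdout {
        codec => rubydebug
      }
    }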