0
votes

I have installed Logstash + Elasticsearch + Kibana on one host and received the error from the title. I have googled all over the related topics, but still no luck and I am stuck. I will share the configs I have made:

elasticsearch.yml

cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25

output from /var/log/elasticsearch/hive.log

[2015-01-13 15:18:06,562][INFO ][node                     ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins                  ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node                     ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node                     ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport                ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery                ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service          ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http                     ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node                     ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway                  ] [logstash-central] recovered [0] indices into cluster_state

Accessing logstash.example.com:9200 gives the usual output, as in the ES guide:

{
  "status" : 200,
  "name" : "logstash-central",
  "cluster_name" : "hive",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}

accessing http://logstash.example.com:9200/_status? gives the following:

{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
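The empty _shards/indices output means Elasticsearch has not indexed any data yet. A quick way to confirm this (a sketch, assuming the host name from the question resolves) is to list indices and check cluster health with curl:

```shell
# List all indices; nothing should appear if Logstash has written no events yet
curl 'http://logstash.example.com:9200/_cat/indices?v'

# Check overall cluster health and document counts
curl 'http://logstash.example.com:9200/_cluster/health?pretty'
```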

Kibana's config.js is default:

 elasticsearch: "http://"+window.location.hostname+":9200"

Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:

server {
    listen                *:80;
    server_name           logstash.example.com;

    location / {
        root  /usr/share/kibana3;
    }
}

The Logstash config file is /etc/logstash/conf.d/central.conf:

input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
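A quick way to catch bracket or syntax mistakes in a file like this is Logstash's built-in config test (the install path below is an assumption; adjust it to your setup):

```shell
# Validate the pipeline config without starting the agent (Logstash 1.x)
/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/central.conf --configtest
```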

Redis is working and the traffic passes between the master and the slave (I've checked it via tcpdump).

15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504

netstat -apnt shows the following:

tcp        0      0 10.1.1.25:6379              10.1.1.50:41617             ESTABLISHED 21112/redis-server
tcp        0      0 10.1.1.25:9300              10.1.1.25:44011             ESTABLISHED 22598/java
tcp        0      0 10.1.1.25:9200              10.1.1.35:51145             ESTABLISHED 22598/java
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      22379/nginx    
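If events arrive in Redis but Logstash never consumes them, the list key will keep growing. You can check its length with redis-cli, using the host and key from the configs above:

```shell
# A steadily growing length means Logstash is not reading from the "logstash" list
redis-cli -h 10.1.1.25 llen logstash
```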

Could you please tell me which way I should investigate this issue?

Thanks in advance.

2
Is it a typo, or is your cluster configuration file really named elasticsearch.yum? – eliasah
It was a typo, sorry. Already edited. – user3909893
I had this error a while ago. I'm not quite sure about the answer I gave you. – eliasah

2 Answers

1
vote

The problem is likely due to the nginx setup and the fact that Kibana, although installed on your server, actually runs in your browser and tries to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js accordingly.

You appear to have a correct nginx proxy set up for Kibana itself, but you'll need some additional work for Kibana to be able to reach Elasticsearch.

Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/

And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ

And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
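For example, a proxy location could be added inside the existing server block, so the browser reaches Elasticsearch through nginx on port 80 (a sketch only; the /es/ path and the upstream address are assumptions, not taken from the question):

```nginx
# Inside the existing "server { ... }" block in /etc/nginx/conf.d/nginx.conf

# Forward /es/ requests from the browser to Elasticsearch on this host
location /es/ {
    proxy_pass http://10.1.1.25:9200/;
    proxy_set_header Host $host;
}
```

config.js would then point at the proxy instead of port 9200, e.g. elasticsearch: "http://"+window.location.hostname+"/es".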

0
votes

You'll have to specify the protocol for elasticsearch in the output section:

elasticsearch {
    host => "logstash.example.com"
    protocol => 'http'
}