0
votes


I have set up a Hadoop cluster in fully distributed mode. First, I set "fs.default.name" in core-site.xml and "mapred.job.tracker" in mapred-site.xml in hostname:port format and changed /etc/hosts correspondingly; the cluster works successfully.

Then I tried another way: I set "fs.default.name" in core-site.xml and "mapred.job.tracker" in mapred-site.xml in ip:port format. It doesn't work.
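For reference, a minimal sketch of what the two variants look like (the hostnames, IP addresses and ports below are placeholders, not my real values):

    <!-- core-site.xml (placeholder host/IP/port) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://master:9000</value>               <!-- first way: hostname:port -->
      <!-- <value>hdfs://192.168.1.10:9000</value>         second way: ip:port -->
    </property>

    <!-- mapred-site.xml (placeholder host/IP/port) -->
    <property>
      <name>mapred.job.tracker</name>
      <value>master:9001</value>                      <!-- first way: hostname:port -->
      <!-- <value>192.168.1.10:9001</value>                second way: ip:port -->
    </property>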

I find

    ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...

in the namenode log file and

    ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...
    java.net.UnknownHostException: slave01: slave01: Name or service not known

in the datanode log file.

In my opinion, an IP address and a hostname are equivalent. Is there something wrong in my Hadoop conf?

Not sure why the exception, but it comes from InetAddress.getLocalHost().getHostName(). Here is the code for MetricsSystemImpl.java. I don't think it's a problem with the configuration files. – Praveen Sripati
Why can I succeed with the first way? The only changed factor is the configuration file, so I infer that there is something wrong with the ip:port format. – soulmachine
Maybe this would solve your problem: stackoverflow.com/questions/16725804/… – city

3 Answers

0
votes

Maybe there is a wrongly configured hostname in /etc.

You should check the hostname, /etc/hosts, /etc/HOSTNAME (RHEL/Debian) or rc.conf (Arch Linux), etc.
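For example, on each node (especially slave01, which the datanode log complains about) checks along these lines should succeed; the IP address shown is only an example:

    # what does the node think its own name is?
    hostname
    hostname -f

    # can that name (here slave01) actually be resolved, via /etc/hosts or DNS?
    getent hosts slave01
    # expected something like: 192.168.1.11   slave01

    # if nothing comes back, inspect /etc/hosts and add a matching entry on every node
    cat /etc/hosts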

0
votes

I got your point. This is probably because in mapred-site.xml you wrote hdfs://ip:port (it starts with hdfs, which is wrong), but when you used hostname:port you probably did not put hdfs at the beginning of that value, which is the correct way. Therefore, the first one did not work, but the second one worked.
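In other words, only fs.default.name takes a full URI with the hdfs:// scheme, while mapred.job.tracker is a bare host:port. A minimal sketch of the correct shape (host and port values are just examples):

    <!-- core-site.xml: URI form with the hdfs:// scheme (example host/port) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://master:9000</value>
    </property>

    <!-- mapred-site.xml: plain host:port, no hdfs:// prefix (example host/port) -->
    <property>
      <name>mapred.job.tracker</name>
      <value>master:9001</value>
    </property>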

Fatih haltas

0
votes

I found the answer here.

It seems that HDFS uses the hostname for all of its communication and display purposes, so we can NOT use an IP address directly in core-site.xml and mapred-site.xml.
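So the practical approach is to keep hostnames in core-site.xml and mapred-site.xml and map those hostnames to IP addresses in /etc/hosts on every node; a sketch (the addresses and the "master" name are just examples):

    # /etc/hosts on every node in the cluster (example addresses)
    127.0.0.1      localhost
    192.168.1.10   master      # namenode / jobtracker host (example name)
    192.168.1.11   slave01     # the datanode that failed to resolve its name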