1 vote

I have installed Hadoop 2.2 on my laptop running Ubuntu as a single-node cluster and ran the word count example. After that I installed Hive, and Hadoop started to give an error, i.e.

hdfs dfs -ls throws IOException : localhost is "utbuntu/127.0.1.1 and destination host is localhost:9000"

I found the following two entries in my hosts file:

127.0.0.1 localhost
127.0.1.1 ubuntu
#and some IPv6 entries...
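
To see how those entries are actually being resolved, something like the following can be run (a minimal sketch using standard Linux tools; "ubuntu" here is just my machine's hostname):

# Show the machine's fully qualified hostname
hostname -f
# Show how localhost and the hostname resolve via /etc/hosts and DNS
getent hosts localhost ubuntu
# Show whether anything is listening on the NameNode RPC port 9000
netstat -tln | grep 9000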

My question is: why is it giving this error after configuring Hive, and what is the solution? Any help is really appreciated.

Thanks!

Comments:

Try commenting out the second entry in your /etc/hosts file (the 127.0.1.1 one) and try again. If that fails, restart your HDFS services and try once more. - Chris White

Hi, many thanks! I commented out the second entry but the error is still the same. I think there is something associated with the Hive installation. I found something related here but still couldn't get rid of this error. - asi24

2 Answers

0 votes

There seems to be a typo 'utbuntu' in your original IOException. Can you please check whether that's the right hostname or a copy-paste error?

The /etc/hosts configuration took a bit of trial and error to figure out for a Hadoop 2.2.0 cluster setup, but what I did was remove all 127.0.1.1 assignments to the hostname, assign the actual IP to the machine name, and it works. e.g.

192.168.1.101 ubuntu

I have a 2-node cluster so my /etc/hosts for master (NameNode) looks like:

127.0.0.1   localhost
#127.0.1.1  myhostname
192.168.1.100   myhostname
192.168.1.100   master

And /usr/local/hadoop/etc/hadoop/core-site.xml has the following:

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
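
After changing /etc/hosts and core-site.xml, I restart HDFS and verify the setting. Roughly like this (a minimal sketch assuming a standard Hadoop 2.2 layout under $HADOOP_HOME; note that fs.default.name is the deprecated name for fs.defaultFS in Hadoop 2.x, but both still work):

# Restart HDFS so the new hostname and filesystem URI take effect
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# Confirm which filesystem URI the client actually picks up
hdfs getconf -confKey fs.default.name

# A simple listing should now work without the IOException
hdfs dfs -ls /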

The main thing to note is that I've commented out the myhostname to 127.0.1.1 association.

0 votes

I also had this issue because my machine had started php-fpm on port 9000, so I killed php-fpm and then the restart was OK.
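
If you suspect something else has grabbed port 9000 (php-fpm commonly listens on TCP 9000), a quick check looks like the sketch below; the exact tool and service name depend on the distribution:

# Show which process, if any, is listening on port 9000
sudo netstat -tlnp | grep 9000
# or, with lsof
sudo lsof -i :9000

# If it is php-fpm, stop it (service name may differ, e.g. php5-fpm on older Ubuntu)
sudo service php-fpm stop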