
I am trying to set up an Apache Hadoop 2.3.0 cluster. I have one master and three slave nodes; the slave nodes are listed in the $HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to the master NameNode on port 9000. However, when I start the DataNode on any of the slaves I get the following exception:

2014-08-03 08:04:27,952 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1086620743-xx.xy.23.162-1407064313305 (Datanode Uuid null) service to server1.mydomain.com/xx.xy.23.162:9000 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode because hostname cannot be resolved .

The following are the contents of my core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://server1.mydomain.com:9000</value>
    </property>
</configuration>

Also, in my hdfs-site.xml I am not setting any value for the dfs.hosts or dfs.hosts.exclude properties.

Thanks.


1 Answer


Each node needs a fully qualified, unique hostname.

Your error says

hostname cannot be resolved

Can you cat the /etc/hosts file on each of your slaves and make sure each node has a distinct hostname that maps to its own IP address?

After that, try again.
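As a sketch, the /etc/hosts file on each node might look like the following. The NameNode's IP is taken from the log line above; the slave IPs and hostnames here are hypothetical placeholders, so substitute your actual addresses and names:

```text
127.0.0.1       localhost

# NameNode (IP as it appears in the error log)
xx.xy.23.162    server1.mydomain.com    server1

# Slaves (hypothetical IPs and hostnames, replace with your own)
xx.xy.23.163    slave1.mydomain.com     slave1
xx.xy.23.164    slave2.mydomain.com     slave2
xx.xy.23.165    slave3.mydomain.com     slave3
```

You can also run `hostname -f` on each slave to confirm it reports the fully qualified name, and check from the master that the same name resolves back to that slave's IP: the DisallowedDatanodeException is raised when the NameNode cannot resolve the connecting DataNode's address to a hostname.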