3 votes

I'm running Hadoop 2.5.2 on 3 machines with Ubuntu Server 14.04.

One is the namenode and resourcemanager, with IP 192.168.3.1. The others are slaves running a datanode and a nodemanager, with IPs 192.168.3.102 and 192.168.3.104 respectively.

I can run start-dfs.sh and start-yarn.sh without any errors. The HDFS and YARN web UIs work fine; I can open both in my browser and see the status of the two slaves.

But when I try to run the MapReduce example under ~/hadoop/share/hadoop/mapreduce via yarn jar hadoop-mapreduce-examples-2.5.2.jar pi 14 1000, the process gets stuck at INFO mapreduce.Job: Running job: ...

The YARN web UI shows that there is one container on a slave and the application state is ACCEPTED.
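The same information can also be checked from the command line with the stock yarn client; a minimal sketch (run it on any node that has the client configuration):

# List the applications the ResourceManager knows about, with their states
yarn application -list

# Show details for one application (the ID is a placeholder; copy the real one from the list above)
yarn application -status <application-id-from-the-list>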

When I type 'jps' on the slave:

20265 MRAppMaster
20351 Jps
19206 DataNode
20019 NodeManager

The syslog file on the slave:

INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...

It seems that the slave uses the default RM address (0.0.0.0) instead of the real one at 192.168.3.1.
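One way to double-check which address each daemon actually resolved (it won't show what the MRAppMaster itself loaded, but it confirms the node-side config) is the /conf servlet that Hadoop daemons expose on their web ports. A rough sketch, assuming the default NodeManager web port 8042; adjust the grep context to the XML layout if needed:

# Effective configuration of a NodeManager on a slave
curl -s http://192.168.3.102:8042/conf | grep -C2 scheduler.address

# Effective configuration of the ResourceManager on the master
curl -s http://192.168.3.1:8088/conf | grep -C2 scheduler.address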

Here is my configuration on the slaves:

yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.3.1</value>
</property> 
<property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.3.1:8032</value>
</property>  
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.3.1:8030</value>
</property>

<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.3.1:8031</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.3.1:8088</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.3.1:8033</value>
</property> 
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

hdfs-site.xml

<configuration>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hduser/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>

<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
</configuration>

core-site.xml

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.3.1:8020</value>
    <description>NameNode URI</description>
</property>
</configuration>

mapred-site.xml

<configuration>

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Use YARN</description>
</property>
</configuration>

The configuration on the master is almost the same, except yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

And I changed yarn-env.sh:

export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/etc/hadoop}"
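As a sanity check (just a sketch, reusing the same expression as that yarn-env.sh line), you can confirm on each node which directory is being read and that its yarn-site.xml really contains the scheduler address:

# Directory YARN will read its config from, per the yarn-env.sh line above
echo "${YARN_CONF_DIR:-$HADOOP_YARN_HOME/etc/hadoop}"

# The yarn-site.xml in that directory should contain the 192.168.3.1 scheduler address
grep -A1 'yarn.resourcemanager.scheduler.address' "${YARN_CONF_DIR:-$HADOOP_YARN_HOME/etc/hadoop}/yarn-site.xml"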

I didn't change /etc/hosts.
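(For reference only: if the config were ever switched from raw IPs to hostnames, /etc/hosts on every node would need entries roughly like the following; the hostnames here are placeholders, not part of the original setup.)

# /etc/hosts (hostnames are placeholders)
192.168.3.1    master
192.168.3.102  slave1
192.168.3.104  slave2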

Does anyone know how I can fix it? Thanks.

If you need other information, just tell me and I'll update.


2 Answers

0 votes
INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...

It's trying to connect to the ResourceManager. It seems like it's not running.

Check the ResourceManager service.
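A few ways to check that, as a rough sketch (jps and the yarn CLI ship with Hadoop; netstat is standard on Ubuntu 14.04):

# On the master: the ResourceManager process should be listed
jps | grep ResourceManager

# The RPC ports from yarn-site.xml (8030/8031/8032) should be listening on the master
netstat -tln | grep -E ':(8030|8031|8032)'

# From any node with the client config: the two slaves should show up as RUNNING
yarn node -list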

0 votes

Finally I figured it out myself.

I downloaded the newer Hadoop 2.6.0 source code and built it on my own machine.

The configuration was the same as with the 2.5.2 version, but it just works!

I think it's better to start from the source code instead of the pre-built release.
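For anyone taking the same route, the usual way to build a distribution tarball from the Hadoop source tree is roughly the following (check BUILDING.txt in the source for the exact prerequisites such as protobuf and a JDK):

# From the top of the hadoop-2.6.0 source tree
mvn clean package -Pdist -DskipTests -Dtar

# The built distribution ends up under hadoop-dist/target/
ls hadoop-dist/target/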