
Hadoop 2.7.1

The main node is cloud1, and the other node is cloud2.

I want to set it up like this: cloud1 runs the NameNode, DataNode, and NodeManager; cloud2 runs the ResourceManager, DataNode, and NodeManager.

And I set up yarn-site.xml like this (each entry wrapped in a property element):

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>cloud2</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>cloud2</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

But the ResourceManager starts locally (on cloud1).

I don't know why this happens.

Please help.

1
Refer to [yarn-default.xml][1]: for yarn.resourcemanager.webapp.address you should specify port 8088. And my main question is: how do you start the Hadoop daemons? [1]: hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/… — masoumeh
@masoumeh I'm starting the Hadoop daemons with these commands: sbin/start-dfs.sh and sbin/start-yarn.sh. — Cloud
Do you execute these commands on cloud1? — masoumeh
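As a concrete illustration of the port suggestion in the comments (appending :8088 is an assumption based on 8088 being the default ResourceManager web UI port in yarn-default.xml), the property would look like:

```xml
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>cloud2:8088</value>
</property>
```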

1 Answer


Refer to Cluster Setup: you should configure yarn.resourcemanager.nodes.include-path for the NodeManager hosts, yarn.resourcemanager.address for the ResourceManager, dfs.hosts for the DataNodes, and fs.defaultFS for the NameNode, and do this on both cloud1 and cloud2. Note that you should

List all slave hostnames or IP addresses in your etc/hadoop/slaves file, one per line

in order to use sbin/start-dfs.sh and sbin/start-yarn.sh. Follow these instructions and tell me the result.
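A minimal sketch of the setup described above, assuming the two hostnames from the question: etc/hadoop/slaves (identical on both nodes) lists every worker host, one per line, and yarn-site.xml on both nodes points at the ResourceManager host.

```xml
<!-- etc/hadoop/slaves (plain text, one hostname per line):
cloud1
cloud2
-->
<!-- yarn-site.xml excerpt, identical on cloud1 and cloud2 -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>cloud2</value>
</property>
```

Also note that sbin/start-yarn.sh starts the ResourceManager on the machine it is run from, which by itself would explain the symptom when it is run on cloud1; running it on cloud2 instead (or starting just the ResourceManager there with sbin/yarn-daemon.sh start resourcemanager) should put it on the intended node.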