0 votes

I have Hadoop 1.2.1 set up on my 3 machines. Decommissioning a machine worked fine, but commissioning a new datanode brings down my other 2 datanodes.

The setup is as follows:

  • 192.168.1.4 -- NameNode, SecondaryNameNode, DataNode, TaskTracker, JobTracker
  • 192.168.1.5 -- DataNode, TaskTracker
  • 192.168.1.6 -- DataNode, TaskTracker

I have set the replication factor to 2 across all the machines.
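
A minimal sketch of where that is typically set, in hdfs-site.xml (the property name is standard; the file location depends on your install):

   <!-- hdfs-site.xml: default block replication -->
   <property>
     <name>dfs.replication</name>
     <value>2</value>
   </property>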

Steps I followed to commission a datanode:

First I started the cluster with 192.168.1.4 and 192.168.1.5 using the start scripts. Then I updated my include file with 192.168.1.6 (see the sketch after the two refresh commands below) and refreshed the nodes:

bin/hadoop dfsadmin -refreshNodes

bin/hadoop mradmin -refreshNodes
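
For reference, the include file is assumed here to be the one pointed to by dfs.hosts in hdfs-site.xml (and mapred.hosts in mapred-site.xml for the TaskTrackers). It must list every datanode that should stay in the cluster, not only the new one; a file containing just 192.168.1.6 would make the NameNode treat the other two nodes as excluded:

   <!-- hdfs-site.xml (mapred-site.xml uses mapred.hosts the same way) -->
   <property>
     <name>dfs.hosts</name>
     <!-- hypothetical path; point this at your actual include file -->
     <value>/usr/local/hadoop/conf/includes</value>
   </property>

   # the include file itself: one host per line, all permitted datanodes
   192.168.1.4
   192.168.1.5
   192.168.1.6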

Then I updated the slaves file.
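
The slaves file, which the start/stop scripts read, would then look like this (all three machines run a DataNode, so all three belong here):

   # conf/slaves: one worker host per line
   192.168.1.4
   192.168.1.5
   192.168.1.6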

bin/hadoop dfsadmin -report -- this shows my initially running datanodes as dead and the newly included datanode as alive.

Please let me know what's wrong in this process and why the other datanodes are brought down.


2 Answers

0 votes

I am not sure what problem could have occurred; when I tried commissioning, it worked fine. But you can try executing the following commands on the dead datanodes individually:

1) ./bin/hadoop-daemon.sh start datanode
2) ./bin/hadoop-daemon.sh start tasktracker

After that, execute the following command on the namenode:

bin/hadoop dfsadmin -report
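
As a usage sketch, assuming passwordless SSH from the namenode and a Hadoop install at /usr/local/hadoop on every node (both assumptions; adjust the hosts and path to your layout), the restart can be scripted in one loop:

   # restart the worker daemons on each node reported as dead
   for host in 192.168.1.4 192.168.1.5; do
     ssh "$host" "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode"
     ssh "$host" "/usr/local/hadoop/bin/hadoop-daemon.sh start tasktracker"
   done
   # then verify from the namenode
   bin/hadoop dfsadmin -report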

-1 votes

Initially, hadoop dfsadmin -report showed the node added as part of commissioning as a dead node.

I did the steps below to overcome the problem:

1) Go to the datanode that you added as part of commissioning.
2) Execute these commands (for Hadoop 2.x):
   /usr/local/hadoop-2.7.2/sbin$ hadoop-daemon.sh start datanode
   /usr/local/hadoop-2.7.2/sbin$ yarn-daemon.sh start nodemanager
3) The NameNode web UI now shows the added node as a live node.
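
To double-check from the command line instead of the web UI (Hadoop 2.x moved HDFS administration to the hdfs command):

   # run on the namenode; the commissioned node should appear under live datanodes
   hdfs dfsadmin -report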