22 votes

I'm running Hadoop 1.1.2 on a cluster with 10+ machines. I would like to scale it up and down nicely, both for HDFS and MapReduce. By "nicely", I mean that no data may be lost (HDFS nodes must be allowed to decommission), and that nodes running a task finish it before shutting down.

I've noticed the datanode process dies once decommissioning is done, which is good. This is what I do to remove a node (the exclude files are wired into the config as sketched after this list):

  • Add node to mapred.exclude
  • Add node to hdfs.exclude
  • $ hadoop mradmin -refreshNodes
  • $ hadoop dfsadmin -refreshNodes
  • $ hadoop-daemon.sh stop tasktracker

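For reference, the exclude files only take effect if the NameNode and JobTracker are pointed at them. Here is a minimal sketch of that wiring; the property names are the standard Hadoop 1.x ones, but the file paths are assumptions for illustration:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <!-- assumed path; point this at your hdfs.exclude file -->
      <value>/opt/hadoop/conf/hdfs.exclude</value>
    </property>

    <!-- mapred-site.xml -->
    <property>
      <name>mapred.hosts.exclude</name>
      <!-- assumed path; point this at your mapred.exclude file -->
      <value>/opt/hadoop/conf/mapred.exclude</value>
    </property>
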
To add the node back (assuming it was removed as above), this is what I do (a script tying both procedures together is sketched after this list):

  • Remove from mapred.exclude
  • Remove from hdfs.exclude
  • $ hadoop mradmin -refreshNodes
  • $ hadoop dfsadmin -refreshNodes
  • $ hadoop-daemon.sh start tasktracker
  • $ hadoop-daemon.sh start datanode

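To reduce the chance of missing a step, both procedures can be scripted. A minimal sketch, assuming the exclude files live in $HADOOP_CONF_DIR, that it runs on the master, and that you can ssh to the node (all assumptions about your setup; the script name is hypothetical):

    #!/bin/bash
    # Usage: ./scale.sh <hostname> remove|add
    CONF="${HADOOP_CONF_DIR:-/opt/hadoop/conf}"   # assumed conf location
    NODE="$1"

    case "$2" in
      remove)
        # add node to both exclude files
        echo "$NODE" >> "$CONF/mapred.exclude"
        echo "$NODE" >> "$CONF/hdfs.exclude"
        # tell the JobTracker and NameNode to reread them
        hadoop mradmin -refreshNodes
        hadoop dfsadmin -refreshNodes
        # stop the tasktracker on the node itself
        ssh "$NODE" hadoop-daemon.sh stop tasktracker
        ;;
      add)
        # remove the node from both exclude files
        sed -i "/^$NODE\$/d" "$CONF/mapred.exclude" "$CONF/hdfs.exclude"
        hadoop mradmin -refreshNodes
        hadoop dfsadmin -refreshNodes
        # restart the daemons on the node
        ssh "$NODE" hadoop-daemon.sh start tasktracker
        ssh "$NODE" hadoop-daemon.sh start datanode
        ;;
    esac
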
Is this the correct way to scale up and down "nicely"? When scaling down, I notice that job duration rises sharply for certain unlucky jobs, since the tasks they had running on the removed node need to be rescheduled.


3 Answers

29 votes

If you have not set up a dfs exclude file before, follow steps 1-3. Otherwise start from step 4.

  1. Shut down the NameNode.
  2. Set dfs.hosts.exclude to point to an empty exclude file.
  3. Restart NameNode.
  4. In the dfs exclude file, specify the nodes, one per line, using the full hostname, IP, or IP:port format.
  5. Do the same in mapred.exclude
  6. Execute bin/hadoop dfsadmin -refreshNodes. This forces the NameNode to reread the exclude file and start the decommissioning process.
  7. Execute bin/hadoop mradmin -refreshNodes.
  8. Monitor the NameNode and JobTracker web UI and confirm the decommission process is in progress. It can take a few seconds to update. Messages like "Decommission complete for node XXXX.XXXX.X.XX:XXXXX" will appear in the NameNode log files when it finishes decommissioning, at which point you can remove the nodes from the cluster.
  9. When the process has completed, the NameNode UI will list the datanode as decommissioned, and the JobTracker page will show the updated number of active nodes. Run bin/hadoop dfsadmin -report to verify (a simple polling loop is sketched after this list). Stop the datanode and tasktracker processes on the excluded node(s).
  10. If you do not plan to reintroduce the machine to the cluster, remove it from the include and exclude files.
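
For steps 8-9, instead of watching the web UI you can poll the dfsadmin report from a shell. A minimal sketch; the grep strings match the Hadoop 1.x report format, so double-check them against your version's output:

    # block until no datanode reports "Decommission in progress" any more
    while hadoop dfsadmin -report | grep -q "Decommission in progress"; do
      echo "still decommissioning..."
      sleep 30
    done
    # then list each node with its final status
    hadoop dfsadmin -report | grep -E "^Name:|Decommission Status"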

To add a node as a datanode and tasktracker, see the Hadoop FAQ page.

EDIT: When a live node is removed from the cluster, what happens to the job?

Jobs running on a node being decommissioned are affected, since the tasks of those jobs scheduled on that node are marked as KILLED_UNCLEAN (for map and reduce tasks) or KILLED (for job setup and cleanup tasks). See line 4633 in JobTracker.java for details. The job will be informed to fail that task. Most of the time, the JobTracker will reschedule execution; however, after many repeated failures it may instead decide to allow the entire job to fail or succeed. See line 2957 onwards in JobInProgress.java.

3 votes

You should be aware that for Hadoop to perform well, it really wants to have the data available in multiple copies. By removing nodes, you reduce the chances of the data being optimally available, and you put extra stress on the cluster to ensure availability.

That is, by taking down a node you force an extra copy of all its data to be made somewhere else. So you shouldn't really be doing this just for fun, unless you use a different data management paradigm than the default configuration (which keeps 3 copies in the cluster).
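
You can watch this extra re-replication work directly: fsck prints a summary that includes an under-replicated block count (the output format below is Hadoop 1.x, so verify against your version):

    # the fsck summary contains a line like "Under-replicated blocks: N (x %)"
    hadoop fsck / | grep -i "under-replicated"

The count should spike when a node enters decommissioning and fall back toward zero as the cluster restores the replication factor.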

And for a Hadoop cluster to perform well, you will want to actually store the data in the cluster. Otherwise, you can't really move the computation to the data, because the data isn't there yet either. Much of Hadoop is about having "smart drives" that can perform computation before sending the data across the network.

So to make this reasonable, you will likely need to split your cluster somehow: have one set of nodes keep the 3 master copies of the original data, and have some "add-on" nodes that are only used for storing intermediate data and performing computations on it. Never change the master nodes, so they don't need to redistribute your data. Shut down add-on nodes only when they are empty? But that is probably not yet implemented.

0 votes

While decommissioning is in progress, temporary or staging files get cleaned up automatically. These files are now missing, and Hadoop does not recognize how they went missing. So the decommissioning process keeps waiting until that is resolved, even though the actual decommissioning is done for all the other files.

In the Hadoop GUI, if you notice that the parameter "Number of Under-Replicated Blocks" is not decreasing over time or stays almost constant, then this is likely the reason.

So list the files using the command below:

hadoop fsck / -files -blocks -racks

If you see that those files are temporary and not required, then delete those files or folders.

Example: hadoop fs -rmr /var/local/hadoop/hadoop/.staging/* (give the correct path here)

This should solve the problem immediately. Decommissioned nodes will move to Dead Nodes within 5 minutes.
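
If you prefer the shell over the GUI for that last check, the dfsadmin report also summarizes live and dead nodes (again, Hadoop 1.x output format, so verify against your version):

    # prints e.g. "Datanodes available: 9 (10 total, 1 dead)"
    hadoop dfsadmin -report | grep "Datanodes available"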