
I have installed Hadoop 1.2.1 on Linux in a single-node cluster configuration. It was running fine, and the jps command showed all five Hadoop daemons (along with the Jps process itself):

  • JobTracker
  • NameNode
  • TaskTracker
  • SecondaryNameNode
  • Jps
  • DataNode

Now, when I start Hadoop with bin/start-all.sh, it starts all five daemons, but within a few seconds the NameNode shuts itself down.
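
For reference, this is roughly what a healthy startup looked like before the problem began (the PIDs are illustrative):

    $ bin/start-all.sh
    $ jps
    2287 NameNode
    2421 DataNode
    2556 SecondaryNameNode
    2645 JobTracker
    2780 TaskTracker
    2890 Jps

After the failure, running jps again lists everything except NameNode.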

Any ideas how I can solve this issue?

I have checked the NameNode log file, and it shows the following error:

 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Edit log corruption detected: corruption length = 98362 > toleration length = 0; the corruption is intolerable.
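
In Hadoop 1.x the NameNode log normally lives under the logs/ directory of the Hadoop install, in a file named hadoop-<user>-namenode-<hostname>.log, so it can be inspected with something like:

    $ tail -n 50 logs/hadoop-$USER-namenode-$(hostname).log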

1 Answer


This has been asked and answered many times; searching for the exception message would give you the results. Before asking a question on Stack Overflow, please check whether the same kind of question has been asked before, using the search option at the top right corner.

Coming to the problem statement: it is most probably due to hadoop.tmp.dir, the directory where your NameNode stores its edit logs and checkpoint data. By default it points under /tmp, and after every reboot of your machine the tmp folder is cleared by the system, so the NameNode finds missing or truncated edit logs when it tries to read them again. That is why the corruption shows up right after a reboot.

To fix it, change the hadoop.tmp.dir property in core-site.xml to point at a directory that survives reboots.
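
A minimal sketch of the change (the path /home/hadoop/tmp is just an example; use any persistent directory writable by the user running Hadoop):

    <!-- core-site.xml -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <!-- example path: any location that is NOT wiped on reboot -->
        <value>/home/hadoop/tmp</value>
      </property>
    </configuration>

Create the directory first (mkdir -p /home/hadoop/tmp). Since the new location starts out empty, you will most likely also have to re-format HDFS with bin/hadoop namenode -format before restarting; note that this wipes any existing HDFS data.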
Hope it helps!