
I am trying to set up a Hadoop single-node cluster on my local machine. I installed Hadoop using the following instructions:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

After starting the cluster with

    bin/start-all.sh

I get the following output from jps:

    19623 TaskTracker
    19388 SecondaryNameNode
    19670 Jps
    19479 JobTracker

I can see the NameNode is not running. I pulled the logs from the /logs directory, and they look like this:

2014-01-24 11:30:20,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/name does not exist.
2014-01-24 11:30:20,617 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-01-24 11:30:20,619 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2014-01-24 11:30:20,620 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ishan-HP-Pavilion-dv6700-Notebook-PC/127.0.1.1

It says the directory path /app/hadoop/tmp/dfs/name does not exist. I tried creating this directory path for the hadoop user, but I got the same error again. Can someone please help me fix this?
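For what it's worth, I created it roughly like this (using the hduser:hadoop owner and group from the tutorial; your user name may differ):

    # create the path named in the log and hand it to the Hadoop user
    sudo mkdir -p /app/hadoop/tmp/dfs/name
    sudo chown -R hduser:hadoop /app/hadoop/tmp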

Please note: I have read similar posts here, but none of them helped.

Thanks!


3 Answers

0 votes

I would suggest you check the permissions on the directory "/app/hadoop/tmp/dfs/name" for the hduser account. Alternatively, you could make sure none of the daemons (SecondaryNameNode etc.) are up and running, and then format the NameNode using this command:

$HADOOP_INSTALL/hadoop/bin/hadoop namenode -format

Try to start your cluster again and see if it works.
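As a rough sketch, the whole sequence might look like this (assuming the tutorial's hduser:hadoop ownership, the $HADOOP_INSTALL variable from above, and the /app/hadoop/tmp path from the question; adjust to your setup):

    # stop any daemons that are still running
    $HADOOP_INSTALL/hadoop/bin/stop-all.sh

    # the storage directory must exist and be writable by the HDFS user
    ls -ld /app/hadoop/tmp
    sudo chown -R hduser:hadoop /app/hadoop/tmp

    # re-create the NameNode's on-disk state, then bring everything back up
    $HADOOP_INSTALL/hadoop/bin/hadoop namenode -format
    $HADOOP_INSTALL/hadoop/bin/start-all.sh

Note that formatting the NameNode wipes any existing HDFS data, which should be fine on a fresh single-node install.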

0 votes

I think "/app/hadoop/tmp/dfs/name" should automatically get created. This folder keeps current info(cache) of namenode. Similarly there must be another folder "namesecondary" with "name" folder. If these folder are not there check "conf/core-site.xml" again. Id's for each daemons is created during each run and these id's are stored in these folders(and also some other information(i don't know exaclty)).

If these folders are available:

  • remove all content of the "name" folder,
  • format the NameNode, and
  • instead of "start-all", run start-dfs.sh and start-mapred.sh separately (see the sketch below).

Hopefully this will work.
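A minimal sketch of those three steps, assuming the /app/hadoop/tmp/dfs/name path from the question and the standard bin/ scripts that ship with Hadoop 1.x:

    # clear the NameNode's storage directory (only safe if you have no HDFS data to keep)
    rm -rf /app/hadoop/tmp/dfs/name/*

    # re-initialize the NameNode, then start HDFS and MapReduce separately
    bin/hadoop namenode -format
    bin/start-dfs.sh
    bin/start-mapred.sh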

0 votes

You do not have permissions to create this directory. Try giving a path in your home directory instead; you should set this path in core-site.xml for the property hadoop.tmp.dir. I have described this at the following link: http://lets-do-something-big.blogspot.in/2014/01/hadoop-series-single-node-installation.html
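For example, the relevant entry in conf/core-site.xml might look like this (the /home/hduser/hadoop_tmp path is only an illustration; point it at any directory your user can write to, and format the NameNode again afterwards):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hduser/hadoop_tmp</value>
      <description>Base directory for Hadoop's temporary and HDFS storage files.</description>
    </property>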