
I am getting the following error when starting my Hadoop cluster, and my namenode will not start. This is the error in the logs: ................... ................... org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible;

The path in the error, /home/ubuntu/hadoop/file:/home/ubuntu/hadoop/hdfs/name, does not look correct. It should be just file:/home/ubuntu/hadoop/hdfs/name.

Does anybody know where this path is taken from?

1 Answer


Check your hdfs-site.xml and core-site.xml. The relevant properties are:

<configuration>

<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
  The default is ${hadoop.tmp.dir}/dfs/name.
  </description>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/var/lib/hadoop/dfs/data</value>
  <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
  The default is ${hadoop.tmp.dir}/dfs/data.
  </description>
</property>

</configuration>

I suspect that you have a file: prefix somewhere in one of these values. Remove it and Hadoop will use the correct path. The mangled path in your error looks like ${hadoop.tmp.dir} (/home/ubuntu/hadoop) concatenated with the literal value file:/home/ubuntu/hadoop/hdfs/name, which is what happens when the value is not parsed as a URI and is instead treated as a relative path.
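For example, assuming the directory you actually intend to use is /home/ubuntu/hadoop/hdfs/name (taken from your error message), the property would be a plain absolute path with no file: prefix:

<property>
  <name>dfs.name.dir</name>
  <!-- plain absolute path, no file: prefix -->
  <value>/home/ubuntu/hadoop/hdfs/name</value>
</property>

After fixing the value, make sure the directory exists and is writable by the user running the namenode.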