20 votes

I configured Hadoop on Windows 7 following the "Setting up a Single Node Cluster" tutorial. When I run hdfs namenode -format to format the namenode, it throws the exception below. And when I run start-all.cmd, the namenode window is automatically forced to close, so I can't open the namenode GUI at http://localhost:50070.

16/01/19 15:18:58 WARN namenode.FSEditLog: No class configured for C, dfs.namenode.edits.journal-plugin.C is empty
16/01/19 15:18:58 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: No class configured for C
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getJournalClass(FSEditLog.java:1615)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1629)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:282)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/01/19 15:18:58 INFO util.ExitUtil: Exiting with status 1
16/01/19 15:18:58 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>   

hdfs-site.xml

<configuration>
   <property>
       <name>dfs.replication</name>
       <value>1</value>
   </property>
   <property>
       <name>dfs.namenode.name.dir</name>
       <value>C:/hadoop/data/namenode</value>
   </property>
   <property>
       <name>dfs.datanode.data.dir</name>
       <value>C:/hadoop/data/datanode</value>
   </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
   <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
   </property>
   <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
</configuration>

3 Answers

45 votes

Change the following properties in hdfs-site.xml from:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>C:/hadoop/data/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>C:/hadoop/data/datanode</value>
</property>

To:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>/hadoop/data/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>/hadoop/data/datanode</value>
</property>
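
After changing these directories, the namenode typically has to be re-formatted before the daemons will start, e.g.:

hdfs namenode -format
start-all.cmd
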
12 votes

On Windows, directory values should follow a format like /c:/path/to/dir or file:///D:/path/to/dir.

I tried using "/hadoop/data/namenode", which prevented the namenode from starting because the specified namenode directory did not exist. I found that formatting with "/hadoop/data/namenode" stores the files on the C drive, but when starting DFS the paths are resolved relative to the drive where the Hadoop installation resides.

I switched to the following and it worked fine:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>/d:/hadoop/data/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>/d:/hadoop/data/datanode</value>
</property>

Hint: don't forget the leading slash before the drive name: /d:/
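
As a sketch of the fully qualified URI form mentioned above (assuming the same D: drive layout), the same properties could also be written as:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///D:/hadoop/data/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///D:/hadoop/data/datanode</value>
</property>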

0 votes

I was able to solve this issue by creating the namenode and datanode directories under the Hadoop root location and using:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>F:\hadoop-2.7.2\data\namenode</value>
</property>

Instead of backslashes, use forward slashes (F:/hadoop-2.7.2/data/namenode), or alternatively make it a valid URI (file:///f:/hadoop-2.7.2/data/namenode).
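
For example, the forward-slash form would look like this in hdfs-site.xml (same example path as above):

<property>
    <name>dfs.namenode.name.dir</name>
    <value>F:/hadoop-2.7.2/data/namenode</value>
</property>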