0
votes

As I am in the learning phase of Hadoop, I am having a problem with a Hadoop single-node cluster setup. I am using Hadoop 2.9.0 and Java 8. I have completed the setup, and it is as below.

core-site.xml

<configuration>
   <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/ubuntu/hadooptmp/hadoop-${user.name}</value>
      <description>A base for other temporary directories.</description>
   </property>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>
</configuration>

mapred-site.xml

<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>localhost:9001</value>
   </property>
</configuration>

yarn-site.xml

<configuration>
   <!-- Site specific YARN configuration properties -->
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/ubuntu/hadoop/hdfs/namenode</value>
      <final>true</final>
   </property>
   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/ubuntu/hadoop/hdfs/datanode</value>
      <final>true</final>
   </property>
</configuration>
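For anyone reproducing this setup: the storage directories referenced in hdfs-site.xml above need to exist and be writable by the Hadoop user before formatting the NameNode (a standard prerequisite I am assuming here; it was not shown in my original post):

```shell
# Create the NameNode and DataNode storage directories from hdfs-site.xml
mkdir -p /home/ubuntu/hadoop/hdfs/namenode
mkdir -p /home/ubuntu/hadoop/hdfs/datanode

# Make sure the user running the Hadoop daemons owns them
chown -R ubuntu:ubuntu /home/ubuntu/hadoop/hdfs
```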

The value of dfs.replication in hdfs-site.xml is 1. When I run start-all.sh and check the status with jps, I get:

7536 ResourceManager
7025 NameNode
7378 SecondaryNameNode
8147 Jps
7667 NodeManager

Then I ran stop-all.sh, changed the value of dfs.replication in hdfs-site.xml to 0 (some people mentioned this as a solution), and ran start-all.sh again. The status is:

9024 DataNode
9362 ResourceManager
9493 NodeManager
9815 Jps

As we can see, the NameNode stopped running in this case. I have tried formatting both, but that did not work. I do not understand what I am doing wrong in the setup that prevents me from running the NameNode and DataNode simultaneously.

I have done the whole single-node cluster setup on an AWS Ubuntu server. I have also tried setting the value of mapreduce.framework.name in mapred-site.xml to "yarn", but it did not help. Please let me know if there is a solution for this.
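For completeness, this is roughly the format-and-restart cycle I went through (standard Hadoop 2.x commands; paths come from my configs above). Note that re-formatting the NameNode assigns it a new clusterID, while any existing DataNode storage keeps the old one:

```shell
# Stop all daemons before formatting
stop-all.sh

# Re-format the NameNode metadata directory
# (this generates a NEW clusterID for the NameNode)
hdfs namenode -format

# Start everything again and check which daemons came up
start-all.sh
jps
```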

The DataNode log is added below.

2018-08-23 15:08:14,494 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2018-08-23 15:08:14,506 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/ubuntu/hadoop/hdfs/datanode/in_use.lock acquired by nodename [email protected]
2018-08-23 15:08:14,509 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/home/ubuntu/hadoop/hdfs/datanode/
java.io.IOException: Incompatible clusterIDs in /home/ubuntu/hadoop/hdfs/datanode: namenode clusterID = CID-e5d9317d-7c4d-40eb-8fa2-030c5a8cfad6; datanode clusterID = CID-f416763a-ed6f-41d7-8b9d-12c298c5d779
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-08-23 15:08:14,516 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 85d94537-7ccc-4d7d-aa2c-48c65a453399) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1649)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1610)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:374)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
        at java.lang.Thread.run(Thread.java:748)
2018-08-23 15:08:14,516 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid 85d94537-7ccc-4d7d-aa2c-48c65a453399) service to localhost/127.0.0.1:9000
2018-08-23 15:08:14,518 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid 85d94537-7ccc-4d7d-aa2c-48c65a453399)
2018-08-23 15:08:16,520 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2018-08-23 15:08:16,531 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ip-172-31-29-76.us-east-2.compute.internal/172.31.29.76
************************************************************/
dfs.replication as 0 is not the solution. Change dfs.replication back to 1, run start-all.sh, and if the DataNode doesn't start, share the DataNode logs; those should give some pointers to the issue. – Shiva Kumar SS
It would be great if you could show the logs for the service. Are you formatting the NameNode? HDFS should also not be set to localhost. And mapreduce.framework.name needs to be "yarn"; it's not a server address. That address is defined by yarn-site.xml. – OneCricketeer
Please find the DataNode log added to the question. Hope this helps. – Mike

1 Answer

0
votes

I think the problem was a clusterID conflict between the NameNode and the DataNode (visible in the log above: the two CIDs do not match). I ran the command "sudo rm -rf /home/ubuntu/hadoop/hdfs/datanode/*", which fixed the issue; now the NameNode and DataNode are both running.
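For reference, the full sequence I ran was roughly the following (the datanode path is taken from my hdfs-site.xml; adjust it for your own install). Be aware this wipes the DataNode's local storage, so any HDFS block data on it is lost, which is acceptable on a fresh single-node setup:

```shell
# Stop all Hadoop daemons first
stop-all.sh

# Remove the stale DataNode storage so that on the next start the
# DataNode re-registers with the NameNode's current clusterID
sudo rm -rf /home/ubuntu/hadoop/hdfs/datanode/*

# Start everything again and verify all daemons with jps
start-all.sh
jps
```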

4632 NameNode
4793 DataNode
5145 ResourceManager
5531 Jps
5276 NodeManager
4989 SecondaryNameNode

Thanks for your help, folks. Let me know if I have missed anything.