
I am experiencing the well-known "could only be replicated to 0 nodes, instead of 1" error in my single-node Hadoop installation while trying to add some files to HDFS:

$ hdfs dfs -mkdir -p /grep/in 
$ hdfs dfs -put /hadoop_install_location/etc/hadoop/* /grep/in/ 

The first command runs fine; the second produces an exception:

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
put: File /grep/in/yarn-site.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.

This exception appears only in the namenode's log. There are no exceptions in the datanode log at the moment the command is run.

What's wrong with my setup? I was just following the steps from this tutorial: http://elephantscale.com/hadoop2_handbook/Hadoop2_Single_Node.html.

I have heard that I need to turn off IPv6, but I haven't done that; is it important? Also, when I call stop-dfs.sh, an exception is printed in the datanode log:

DataNode: IOException in offerService java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "debian/127.0.1.1"; destination host is: "localhost":8020;

But this happens even if I do not run any put command; it happens every time I shut down the DFS.
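If IPv6 does turn out to matter, the approach I have seen suggested (untested on my setup, so an assumption) is to make the Hadoop daemons prefer the IPv4 stack via a standard JVM flag in hadoop-env.sh:

    # in hadoop_install_location/etc/hadoop/hadoop-env.sh (untested assumption for this setup)
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"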

The namenode web UI says:

Configured Capacity: 1.87 GB
DFS Used: 28 KB
Non DFS Used: 1.65 GB
DFS Remaining: 230.8 MB
DFS Used%: 0%
DFS Remaining%: 12.04%
Block Pool Used: 28 KB
Block Pool Used%: 0%
DataNodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes: 1 (Decommissioned: 0)
Dead Nodes: 0 (Decommissioned: 0)
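The same capacity and remaining-space figures can be cross-checked from the command line:

    # reports configured capacity, DFS used/remaining, and live datanodes
    hdfs dfsadmin -report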

etc/hadoop/slaves contains only the line localhost,

/etc/hosts has the lines 127.0.0.1 localhost and 127.0.1.1 debian.

How can I fix this? Could you advise me on what else I can check, and how?

Comments:

is your datanode listed in live nodes? – vishnu viswanath
hdfs dfsadmin -report lists it as live – MiamiBeach
there is a process, says jps, and there are no issues with permissions - everything is done as root. – MiamiBeach
try removing the entry corresponding to 127.0.1.1 from /etc/hosts (a sketch of that change is below) – vishnu viswanath
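For reference, the change suggested in the last comment would leave /etc/hosts looking roughly like this (a sketch only; the exact entries depend on the distribution):

    # /etc/hosts with the suggested change applied (sketch, not verified)
    127.0.0.1   localhost
    # 127.0.1.1   debian   <- removed or commented out so "debian" no longer resolves here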

1 Answer


I finally solved this problem by giving my system more drive space.

I am using VirtualBox, so I had to reinstall the whole operating system and Hadoop. With the new settings it now works fine. So my main guess is that the problem was related to the amount of free space available: 240 MB is not enough even for a single-node setup.
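For anyone hitting the same error without wanting to reinstall, a rough way to confirm the diagnosis is to compare the free space the OS reports on the datanode's volume with what HDFS believes it has (the data directory path below is an assumption; it is whatever dfs.datanode.data.dir resolves to, which by default is a directory under hadoop.tmp.dir):

    # free space on the volume backing the datanode data directory (path is an assumption)
    df -h /tmp/hadoop-root/dfs/data

    # capacity and remaining space as the namenode sees them
    hdfs dfsadmin -report | grep -E 'Configured Capacity|DFS Remaining'

If DFS Remaining is down to a few hundred megabytes, the namenode can refuse to allocate new blocks even though the datanode is alive, which matches the error above.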