0 votes

I'm trying to set up a Hadoop multi-node cluster and I'm running into the following problem. I have one node as master and another one as slave.

It seems that everything is all right, because when I execute jps I get these processes on the master:

29983 SecondaryNameNode
30596 Jps
29671 NameNode
30142 ResourceManager

And these on the slave:

18096 NodeManager
17847 DataNode
18197 Jps

Unfortunately, when I try the -put command, I get this error:

hduser@master:/usr/local/hadoop/bin$ ./hdfs dfs -put /home/hduser/Ejemplos/fichero /Ejemplos/
14/03/24 12:49:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/03/24 12:49:07 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /Ejemplos/fichero.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

When I go to the WebUI, there are 0 live nodes and I don't know why! I can't fix this error and I would appreciate some help!
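
In case it helps, I believe the namenode's view of the datanodes can also be checked from the command line with something like the following (assuming the standard Hadoop 2.x layout, so this is just a sketch):

# run from the bin directory on the master; should list the live datanodes and their capacity
./hdfs dfsadmin -report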


4 Answers

1 vote
File /Ejemplos/fichero.COPYING could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

The above error means the datanodes are either down or unable to communicate properly with the namenode. You can check the configuration you specified in hdfs-site.xml and core-site.xml.
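
For reference, a minimal sketch of what those files usually contain on a two-node setup; the hostname master, the port, and the directory paths are only examples and have to match your cluster.

core-site.xml (on both nodes; the datanodes must be able to reach this address, so it should not be localhost):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/dfs/datanode</value>
  </property>
</configuration>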

1 vote

I had a similar issue and solved it like this (commands sketched after the list):

  1. Stop Hadoop (stop-dfs.sh and stop-yarn.sh)
  2. Manually delete the dfs/namenode and dfs/datanode directories
  3. Format the namenode (hdfs namenode -format)
  4. Start Hadoop again (start-dfs.sh and start-yarn.sh)
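
Roughly, that comes down to the commands below. The dfs/namenode and dfs/datanode paths are only examples and have to match whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your hdfs-site.xml, and note that this wipes everything stored in HDFS:

# 1. stop YARN and HDFS
stop-yarn.sh
stop-dfs.sh

# 2. remove the old namenode/datanode directories (on every node; example paths)
rm -rf /usr/local/hadoop/dfs/namenode/*
rm -rf /usr/local/hadoop/dfs/datanode/*

# 3. re-format the namenode (on the master only)
hdfs namenode -format

# 4. start everything again
start-dfs.sh
start-yarn.sh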

There might be other issues as well, such as lack of disk space or the slaves not being configured in $HADOOP_HOME/etc/hadoop/slaves (that file contains only localhost by default).
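
For the slaves part, the file should simply list the hostname of every datanode machine, one per line; slave below is just an example hostname:

# contents of $HADOOP_HOME/etc/hadoop/slaves
slave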

0 votes

You are using:

./hdfs dfs -put....

Try

hadoop fs -put LocalFile.name /username/

or

hadoop fs -copyFromLocal LocalFile.name /username/

0 votes

You will want to check the log files of your datanode (slave) for errors in your setup. If you run Cloudera CDH, you'll find these in /var/log/hadoop-hdfs, otherwise in the directory specified in your config.

The error "could only be replicated to 0 nodes" points to a problem there.

Also make sure that the slave and master can connect via SSH with key authentication.
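
A quick sketch of both checks; the log file name follows the usual hadoop-<user>-datanode-<host>.log pattern, and the paths and hostnames here are only examples:

# on the slave: look at the end of the datanode log for connection errors
tail -n 100 /usr/local/hadoop/logs/hadoop-hduser-datanode-slave.log

# on the master: key-based ssh to the slave should work without a password prompt
ssh slave jps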

Just a quick question: did you format your namenode?