I have a small Hadoop cluster (4 servers) with HBase installed, but it is not working well. After I run 'start-hbase.sh', the logs of the 3 HRegionServers show:
2016-07-27 21:29:55,122 WARN [ResponseProcessor for block BP-1601089490-xx.xx.xx.xx-1469276064635:blk_1073742337_1586] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1601089490-xx.xx.xx.xx-1469276064635:blk_1073742337_1586
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2000)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:798)
2016-07-27 21:29:55,223 WARN [DataStreamer for file /hbase/WALs/server2,16020,1469669327730/server2%2C16020%2C1469669327730.default.1469669334510 block BP-1601089490-xx.xx.xx.xx-1469276064635:blk_1073742337_1586] hdfs.DFSClient: Error Recovery for block BP-1601089490-xx.xx.xx.xx-1469276064635:blk_1073742337_1586 in pipeline xx.xx.xx.200:50010, xx.xx.xx.20:50010: bad datanode xx.xx.xx.200:50010
2016-07-27 21:29:55,247 WARN [DataStreamer for file /hbase/WALs/server2,16020,1469669327730/server2%2C16020%2C1469669327730.default.1469669334510 block BP-1601089490-xx.xx.xx.xx-1469276064635:blk_1073742337_1586] hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[xx.xx.xx.20:50010], original=[xx.xx.xx.20:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:933)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
Soon after, no HRegionServer is left alive. I had already configured hdfs-site.xml with:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
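From what I have read, these are client-side settings, so my assumption is that they must be visible to the HBase processes themselves (for example, an hdfs-site.xml on HBase's classpath), not only to the HDFS daemons; otherwise the RegionServer's DFSClient would still use the DEFAULT policy from the error message. I also found a related best-effort property mentioned in the HDFS client configuration, though I am not sure it exists in my Hadoop version. The fragment I am considering is:

```xml
<!-- Client-side pipeline-recovery settings: with only a few datanodes,
     replacing a failed datanode in a write pipeline often cannot succeed,
     so replacement is disabled here. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
<!-- Assumption on my part: this tells the client to keep writing even if
     datanode replacement fails; it may not be available in older Hadoop. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>
```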
It still logs this WARN. I have googled it, but nothing I found has helped. Can anybody help me?