14 votes

I have written the following HBase client class for a remote server:

System.out.println("Hbase Demo Application ");

            // CONFIGURATION

                // ENSURE RUNNING
            try {
                HBaseConfiguration config = new HBaseConfiguration();
                config.clear();
                config.set("hbase.zookeeper.quorum", "192.168.15.20");
                config.set("hbase.zookeeper.property.clientPort","2181");
                config.set("hbase.master", "192.168.15.20:60000");
                //HBaseConfiguration config = HBaseConfiguration.create();
    //config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally
                HBaseAdmin.checkHBaseAvailable(config);


                System.out.println("HBase is running!");
            //  createTable(config);    
                //creating a new table
                HTable table = new HTable(config, "mytable");
                System.out.println("Table mytable obtained ");  
                addData(table);
            } catch (MasterNotRunningException e) {
                System.out.println("HBase is not running!");
                System.exit(1);
            }catch (Exception ce){ ce.printStackTrace();

It throws the following exception:

Oct 17, 2011 1:43:54 PM org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation getMaster
INFO: getMaster attempt 0 of 1 failed; no more retrying.
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:328)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:883)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:750)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
    at $Proxy4.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:419)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:359)
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:89)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:1215)
    at com.ifkaar.hbase.HBaseDemo.main(HBaseDemo.java:31)
HBase is not running!

Can you tell me why it is throwing this exception, what is wrong with the code, and how to solve it?

Please accept answers for your old questions before asking new ones. No one will be willing to answer your questions if you don't give any feedback. – frail
OK, from now on I will. I was a new user to this site; normally I do vote for the answer which helps me. Thank you. – Ali Raza
Can you comment out the line config.set("hbase.master", "192.168.15.20:60000"); and try again? – frail
I have tried again, but the result is the same. – Ali Raza
Can you telnet to 192.168.15.20:2181? (Is there a connection issue?) – frail
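
For reference, the telnet check suggested in the last comment can also be done in a few lines of Java (a minimal sketch, assuming the addresses and ports from the question):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Same idea as "telnet 192.168.15.20 2181": 2181 is the ZooKeeper
        // client port and 60000 the HMaster RPC port from the question.
        for (int port : new int[] {2181, 60000}) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("192.168.15.20", port), 3000);
                System.out.println("port " + port + " is reachable");
            } catch (Exception e) {
                System.out.println("port " + port + " is NOT reachable: " + e.getMessage());
            }
        }
    }
}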

5 Answers

22 votes

This problem occurs because of your HBase server's hosts file.
You just need to edit your HBase server's /etc/hosts file:
remove the localhost entry from that file and put localhost in front of the HBase server's IP.

For example, suppose your HBase server's /etc/hosts file looks like this:

127.0.0.1 localhost
192.166.66.66 xyz.hbase.com hbase

You have to change it like this by removing localhost:

# 127.0.0.1 localhost # line commented out
192.166.66.66 xyz.hbase.com hbase localhost # note: localhost added here

This is because when the remote machine asks the HBase server machine where HMaster is running, it answers that it is running on localhost.
So if the entry is 127.0.0.1, the HBase server returns this address and the remote machine starts looking for HMaster on its own machine (locally).
When we replace that with the HBase server's IP, everything works fine :)
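
A quick way to check which address the server will advertise is a minimal resolution check run on the HBase server itself (a sketch; it prints whatever the server's /etc/hosts resolves the machine's own hostname to):

import java.net.InetAddress;

public class HostResolutionCheck {
    public static void main(String[] args) throws Exception {
        // Run this on the HBase server. If the machine's own hostname
        // resolves to 127.0.0.1, the master will advertise itself as
        // localhost and remote clients will get "Connection refused".
        InetAddress local = InetAddress.getLocalHost();
        System.out.println(local.getHostName() + " -> " + local.getHostAddress());
    }
}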

2 votes

I agree: HBase is very sensitive to /etc/hosts configuration. I had to set the ZooKeeper binding properties in hbase-site.xml correctly in order for the Java code above to work. For example, I had to set them as follows:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>www.remoterg12.net</value>      <!-- this is the externally accessible domain -->
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>                    <!-- everything needs to be externally accessible -->
</property>
<property>
  <name>hbase.master.info.port</name>    <!-- http://www.remoterg12.net:60010/ -->
  <value>60010</value>
</property>
<property>
  <name>hbase.master.info.bindAddress</name>
  <value>www.remoterg12.net</value>      <!-- use this to access the GUI console -->
</property>

The remote GUI will give you a clear picture of the binding domains. For example, the [HBase Master] property in the GUI web console should be something like www.remoterg12.net:60010 (it should NOT be localhost:60010). And yes, I did have to play around with /etc/hosts to get it just right, as I didn't want to mess up the existing Apache configs :-)
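
For completeness, a client does not need the master.info settings above; the two ZooKeeper properties can also be set programmatically instead of via hbase-site.xml (a sketch reusing the example domain from this answer):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClientConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // www.remoterg12.net is the example domain from this answer;
        // replace it with your externally accessible hostname.
        conf.set("hbase.zookeeper.quorum", "www.remoterg12.net");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        System.out.println("quorum = " + conf.get("hbase.zookeeper.quorum"));
    }
}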

0 votes

The same problem can be solved by editing the conf/regionservers file in the HBase directory to add the remote HBase server to it. Then there is no need to change the /etc/hosts file.

After editing, conf/regionservers will look like:

localhost
<IP address of the remote HBase server>

For example:

localhost              
10.132.258.366

0 votes

Exact same problem here with HBase 1.1.3: two virtual machines (Ubuntu) on the same network. The logs show that the client can reach ZooKeeper but not the HBase server.

TL;DR: remove the following line from /etc/hosts on the server (server_hostname):

127.0.1.1 server_hostname server_hostname

And add this one, where 192.x.y.z is the IP of your server on the (local) network:

192.x.y.z server_hostname

I tried a lot of combinations on the client and server sides, and in standalone mode I don't think there is a better approach. Not really proud of that. It is a shame to have to mess with the network configuration, and to not even be given an HBase shell client able to connect remotely to a server (welcome to the Java world of illusions...).

On the server side, leave the file conf/hbase-site.xml empty. You don't need to put a ZooKeeper configuration in there; the defaults are fine. Same for conf/regionservers: leave it with the default entry (localhost), because in standalone mode it doesn't seem to matter (I tried to put server_hostname in it, and of course that does not work).

On the client side, it must know the server by hostname if you want to resolve with it, so again add an entry for the server in the client's /etc/hosts file.

As a bonus, here are my sbt configuration and some complete working code for the client, since the HBase team seems to have spent the documentation budget in Vegas for the last 4 years (again, welcome to the «business ready» world of Java/Scala).

build.sbt:

libraryDependencies ++= Seq(
  ...
  "org.apache.hadoop" % "hadoop-core" % "1.2.1",
  "org.apache.hbase" % "hbase" % "1.1.2",
  "org.apache.hbase" % "hbase-client" % "1.1.2"
)

some_client_code.scala:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HTable, Put, HBaseAdmin}
import org.apache.hadoop.hbase.util.Bytes

val hbaseConf = HBaseConfiguration.create()
hbaseConf.set("hbase.zookeeper.quorum", "server_hostname")
HBaseAdmin.checkHBaseAvailable(hbaseConf)

val table = new HTable(hbaseConf, "my_hbase_table")
val put = new Put(Bytes.toBytes("row_key"))
put.add(Bytes.toBytes("cf"), Bytes.toBytes("colId1"), Bytes.toBytes("foo"))
table.put(put)   // write the row to the table
table.close()    // flush and release resources

0 votes

I know it is too late to answer this question, but I want to share how I resolved a similar issue.

I had the same issue, and I tried to set the ZooKeeper quorum from the Java program and also via the CLI, but neither worked.

I am using CDH 5.7.7 with HBase version 1.1.0. Finally, I had to export a few configs to the Hadoop classpath to fix the issue. Here is the config that I exported:

export HADOOP_CLASSPATH=/etc/hadoop/conf:/usr/share/cmf/lib/cdh5/hbase-protocol-0.98.1-cdh5.5.0.jar:/etc/hbase/conf:/driven/conf

Hope this helps.