
I'm trying to connect to HBase on a remote server from Java.

Here is the case :

The Java class runs on my local computer (192.168.111.165).

HDFS and HBase run on another server (192.168.11.7).

Here is my Hadoop's core-site.xml (in 192.168.11.7) :

<configuration>
  <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.11.7:9000</value>
  </property>
</configuration>

Here is my Hadoop's hdfs-site.xml (in 192.168.11.7) :

<configuration>
  <property>
        <name>dfs.replication</name>
        <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
  </property>   
</configuration>

Here is my HBase's hbase-site.xml (in 192.168.11.7) :

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.11.7:9000/hbase</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

I have started HDFS and HBase successfully; as proof, I can run the "list" command in the HBase shell (over ssh to 192.168.11.7 from my local computer). From my local computer (192.168.111.165), I can also reach the web UIs in a browser:

  • Hadoop NameNode: http://192.168.11.7:50070/
  • Hadoop cluster (YARN): http://192.168.11.7:8088/
  • HBase: http://192.168.11.7:16010/
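To rule out connectivity problems beyond the web UIs, it may also help to probe the RPC ports from the client machine. Here is a minimal sketch in plain Java; the port numbers are the HBase/ZooKeeper defaults (16000 for the master, 16020 for the region server, 2181 for ZooKeeper), which is an assumption about this particular setup:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Host is taken from the question; ports are the assumed defaults.
        String host = "192.168.11.7";
        int[] ports = { 2181, 16000, 16020 };
        for (int port : ports) {
            System.out.println(host + ":" + port + " open? " + isOpen(host, port, 1000));
        }
    }
}
```

If the web UI ports answer but the RPC ports do not, that points at a firewall rather than at the client code.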

Now I am developing Java code on my local computer (192.168.111.165), whose hbase-site.xml is:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://192.168.11.7:9000/hbase</value>
    </property>
</configuration>

and my main code is as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

private static Configuration conf = null;

public static void main(String[] args) throws Exception {
    try {
        conf = HBaseConfiguration.create();
        String tablename = "scores";
        String[] familys = { "grade", "course" };
        creatTable(tablename, familys);
    } catch (Exception e) {
        e.printStackTrace();
        throw e;
    }
}

// Static so it can be called from main without an instance.
public static void creatTable(String tableName, String[] familys)
        throws Exception {
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
        if (admin.tableExists(tableName)) {
            System.out.println("table already exists!");
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tableName);
            for (String family : familys) {
                tableDesc.addFamily(new HColumnDescriptor(family));
            }
            admin.createTable(tableDesc);
            System.out.println("create table " + tableName + " ok.");
        }
    } finally {
        admin.close();
    }
}

When I run the code, I get an exception with the following stack trace:

org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:319)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:416)
at com.volt.kismistest.handler.TestHBaseHandler.creatTable(TestHBaseHandler.java:82)
at com.volt.kismistest.handler.TestHBaseHandler.doAction(TestHBaseHandler.java:47)
at com.volt.kismistest.handler.TestHBaseHandler.doAction(TestHBaseHandler.java:1)

Notes on the stack trace:

TestHBaseHandler is the name of my Java class.

TestHBaseHandler.java:82 is "if (admin.tableExists(tableName)) {" in the creatTable method.

TestHBaseHandler.java:47 is "creatTable(tablename, familys);" in the main method.

Can anyone help me fix this problem?

Thank you very much


2 Answers

1 vote

You said you can run list, but can you run create and scan on a particular table? It's possible that you forgot to start the region server. I would run something like this in the HBase shell:

create 'testtable', {NAME => 'testcf'}

to see whether table creation works from the shell as well. If it does, run scan 'testtable' to verify that you can scan it.

It could also be a firewall issue: the region server might be up, but with its ports firewalled off.

Another thing: you can call HBaseAdmin#available from your client to check whether HBase is up and running.

1 vote

Just a guess here, as many issues depend on how and where HBase has been installed and on your network configuration.

In HBase, most of the work is done by the slave nodes; the master node is mainly your initial point of contact. Most of the time you hit the master, which simply returns the IP of the slave node (it may well be an internal IP) that is supposed to fulfill the operation (certainly for reads/writes, possibly also for create table). If your client cannot resolve, or has no route to, that slave IP, you will run into connectivity issues. The exceptions you get back may vary.
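You can check that resolution directly from the client machine. A small sketch in plain Java; "hbase-slave-1" below is a hypothetical hostname, so substitute whatever name the master actually hands back (e.g. from the client's log output or the master web UI):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    // Returns the textual IP a hostname resolves to, or null if it cannot be resolved.
    public static String resolve(String hostname) {
        try {
            return InetAddress.getByName(hostname).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Hypothetical region server name; replace with the name your master returns.
        System.out.println("resolves to: " + resolve("hbase-slave-1"));
    }
}
```

If this prints null, or an address your client has no route to, that matches the connectivity problem described above.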

Things to try:

  • Can you create the table from the HBase shell on the master node?
    • NOTE: list is not enough, as it might be served by the master node alone...
  • Does the command work through the REST API? (You can start it from the master's shell.)
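One more thing worth checking, though this is a guess: an HBase client locates the cluster through ZooKeeper rather than through hbase.rootdir, so the client-side hbase-site.xml usually also needs to point at the ZooKeeper quorum. A sketch, using the server address from the question and assuming ZooKeeper listens on the default port 2181:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.11.7</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```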