I am running HBase on a standalone Cloudera VM. I am trying to connect to HBase from my Java code, but I keep getting this exception every time. I have not added any entries to hbase-site.xml. HBase and ZooKeeper are both up and running. Below is the exception I am getting:
ming-2.6.0-cdh5.7.0.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/lib/native
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.el6.x86_64
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:user.name=cloudera
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/cloudera
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/cloudera
16/08/03 23:04:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x2da61880x0, quorum=localhost:2181, baseZNode=/hbase
16/08/03 23:04:33 INFO zookeeper.ClientCnxn: Opening socket connection to server quickstart.cloudera/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/08/03 23:04:33 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /127.0.0.1:56535, server: quickstart.cloudera/127.0.0.1:2181
16/08/03 23:04:33 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/127.0.0.1:2181, sessionid = 0x1562b869790000e, negotiated timeout = 40000
16/08/03 23:04:33 INFO mapreduce.HFileOutputFormat: Looking up current regions for table user;hconnection-0x2da6188 org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Wed Aug 03 23:05:21 PDT 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68660: row 'user,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1469171048401, seqNum=0
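Since hbase-site.xml has no entries, the client is falling back to its defaults (the connectString=localhost:2181 in the log above). For reference, a minimal client-side hbase-site.xml would look like the sketch below — the hostname is an assumption taken from the quickstart.cloudera name in the log, not from my actual configuration:

```xml
<!-- Minimal client-side hbase-site.xml (sketch, not my actual file).
     The quorum hostname is an assumption based on the log output above. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>quickstart.cloudera</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```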
This is my Java code:
public int run(String[] args) throws Exception {
    int result = 0;
    String outputPath = args[1];
    Configuration configuration = getConf();
    configuration.set("data.seperator", DATA_SEPERATOR);
    configuration.set("hbase.table.name", TABLE_NAME);
    configuration.set("COLUMN_FAMILY_1", COLUMN_FAMILY_1);
    configuration.set("COLUMN_FAMILY_2", COLUMN_FAMILY_2);
    Job job = new Job(configuration);
    job.setJarByClass(HBaseBulkLoadDriver.class);
    job.setJobName("Bulk Loading HBase Table::" + TABLE_NAME);
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapperClass(HBaseBulkLoadMapper.class);
    FileInputFormat.addInputPaths(job, args[0]);
    FileSystem.getLocal(getConf()).delete(new Path(outputPath), true);
    FileOutputFormat.setOutputPath(job, new Path(outputPath));
    job.setMapOutputValueClass(Put.class);
    HFileOutputFormat.configureIncrementalLoad(job, new HTable(configuration, TABLE_NAME));
    job.waitForCompletion(true);
    if (job.isSuccessful()) {
        HBaseBulkLoad.doBulkLoad(outputPath, TABLE_NAME);
    } else {
        result = -1;
    }
    return result;
}