1 vote

I have some data in an HBase table (on HDFS) and I copied it to my local file system. Then, on my second machine, I used the copyFromLocal Hadoop command to copy the data from the local file system into HDFS. Now when I run the "list" command in the HBase shell on the second machine, it shows that there is no table. I copied the table into the HBase data directory in HDFS, so the table should appear in HBase.

Where is the problem? On both machines, the HBase and Hadoop versions are the same. How can I copy an HBase table from one cluster to another?
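
Roughly, these are the commands I ran (the table name and paths here are just illustrative; /hbase is my hbase.rootdir):

# on the first machine: copy the table's files out of HDFS
$ hadoop fs -copyToLocal /hbase/mytable /tmp/mytable

# on the second machine, after transferring /tmp/mytable over:
$ hadoop fs -copyFromLocal /tmp/mytable /hbase/mytable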

Is the second machine you mentioned a standalone setup? If yes, then simply copying to HDFS will only update the HDFS metadata entries; for HBase you will need to create a table and import the data into it. – Rajen Raiyarela

2 Answers

1 vote

There are a few tools already available for managing such tasks (all documented here: http://hbase.apache.org/book/ops_mgt.html).


  1. HBase CopyTable tool

http://hbase.apache.org/book/ops_mgt.html#copytable

$ ./bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --help
Usage: CopyTable [general options] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>

Options:
 rs.class     hbase.regionserver.class of the peer cluster, 
              specify if different from current cluster
 rs.impl      hbase.regionserver.impl of the peer cluster,
 startrow     the start row
 stoprow      the stop row
 starttime    beginning of the time range (unixtime in millis)
              without endtime means from starttime to forever
 endtime      end of the time range.  Ignored if no starttime specified.
 versions     number of cell versions to copy
 new.name     new table's name
 peer.adr     Address of the peer cluster given in the format
              hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
 families     comma-separated list of families to copy
              To copy from cf1 to cf2, give sourceCfName:destCfName.
              To keep the same name, just give "cfName"
 all.cells    also copy delete markers and deleted cells

Args:
 tablename    Name of the table to copy

Examples:
 To copy 'TestTable' to a cluster that uses replication for a 1 hour window:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 --peer.adr=server1,server2,server3:2181:/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable
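
For a plain full-table copy to another cluster, a minimal invocation might look like this (the ZooKeeper quorum hosts are placeholders; note that CopyTable expects the destination table to already exist on the peer cluster with the same column families):

# copy all of 'TestTable' to the peer cluster, keeping the same table name
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zk1,zk2,zk3:2181:/hbase TestTable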

  2. HBase export/import tools

http://hbase.apache.org/book/ops_mgt.html#export

http://hbase.apache.org/book/ops_mgt.html#import

a) Export the data

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

b) scp the data to the remote machine

c) Import the data

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
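
Putting the three steps together, an end-to-end run might look like the following (the table name, paths, host, and column family are placeholders; note that Import does not create the table, so it must be created on the destination with the same column families first):

# a) on the source cluster: export the table to an HDFS directory
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export mytable /tmp/mytable-export

# b) pull the export out of HDFS and ship it to the second machine
$ hadoop fs -copyToLocal /tmp/mytable-export /tmp/mytable-export
$ scp -r /tmp/mytable-export user@second-machine:/tmp/

# on the destination: push the files back into HDFS
$ hadoop fs -copyFromLocal /tmp/mytable-export /tmp/mytable-export

# c) create the target table, then import
hbase> create 'mytable', 'cf1'
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import mytable /tmp/mytable-export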

  3. Use snapshots

Recommended for HBase 0.94.6+. You can find all the info here: http://hbase.apache.org/book/ops.snapshots.html
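
A sketch of the snapshot route (the table name, snapshot name, and destination NameNode address are placeholders; on 0.94 you also have to enable snapshots by setting hbase.snapshot.enabled to true first):

# in the HBase shell on the source cluster
hbase> snapshot 'mytable', 'mytable_snapshot'

# ship the snapshot to the destination cluster's hbase.rootdir
$ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mytable_snapshot -copy-to hdfs://second-machine:8020/hbase -mappers 4

# in the HBase shell on the destination cluster
hbase> clone_snapshot 'mytable_snapshot', 'mytable'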

0 votes

I have to add some information: run the following command if you copied your table with Hadoop commands instead of the HBase tools (assuming the versions are the same). The data is in HDFS, but there is no corresponding entry in the .META. table, so the following will do the job.

$ bin/hbase hbck -repairHoles

But remember that if you use this method for HBase table backups, there is a possibility that some data may be inconsistent.
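
A check-repair-verify sequence might look like this (hbck with no arguments only reports inconsistencies, it does not change anything):

# report inconsistencies between HDFS and .META.
$ bin/hbase hbck

# re-create missing .META. entries for regions that exist in HDFS
$ bin/hbase hbck -repairHoles

# confirm the table now shows up
$ echo "list" | bin/hbase shell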