
We currently import data to HBase tables via Spark RDDs (pyspark) by using saveAsNewAPIHadoopDataset().

Does this function use the HBase bulk loading feature via MapReduce? In other words, is saveAsNewAPIHadoopDataset(), which writes directly to HBase, equivalent to using saveAsNewAPIHadoopFile() to write HFiles to HDFS and then invoking org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles to load them into HBase?
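
For reference, the load step of that alternative would look roughly like the sketch below (invoked after the HFiles have been written out); the HDFS staging path and table name are placeholders:

import subprocess

# Hand a directory of pre-generated HFiles to HBase's bulk loader.
# Both the staging path and table name below are placeholders.
subprocess.check_call([
    "hbase", "org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles",
    "/user/etl/hfiles/my_table",  # HDFS directory containing the HFiles
    "my_table"])                  # target HBase table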

Here is an example snippet of our HBase loading routine:

conf = {"hbase.zookeeper.quorum": config.get(gethostname(),'HBaseQuorum'),
        "zookeeper.znode.parent":config.get(gethostname(),'ZKznode'),
        "hbase.mapred.outputtable": table_name,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"

spark_rdd.saveAsNewAPIHadoopDataset(conf=conf,keyConverter=keyConv,valueConverter=valueConv)

1 Answer


Not exactly. RDD.saveAsNewAPIHadoopDataset and RDD.saveAsNewAPIHadoopFile do almost the same thing; their APIs just differ slightly. Each offers a different 'mechanism vs policy' split: saveAsNewAPIHadoopFile takes the output path, output format class and key/value classes as explicit arguments, while saveAsNewAPIHadoopDataset reads all of them from the Hadoop configuration you pass in.
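
To make that concrete, here is a minimal sketch of the same write expressed through saveAsNewAPIHadoopFile, reusing the conf, keyConv and valueConv names from the question. The path argument is a placeholder needed only to satisfy the API, since TableOutputFormat writes to HBase rather than to that directory:

spark_rdd.saveAsNewAPIHadoopFile(
    "/tmp/ignored-output-dir",  # placeholder; TableOutputFormat does not write here
    "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",  # output format passed as an argument instead of via conf
    keyClass="org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    valueClass="org.apache.hadoop.io.Writable",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)

Both calls end up driving the same Hadoop OutputFormat machinery; which one is more convenient mostly depends on whether you prefer to keep the output classes in code or in the configuration dictionary.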