1
vote

I have to import over 400 million rows from a MySQL table (with a composite primary key) into a partitioned Hive table via Sqoop. The table holds two years of data, with a departure-date column ranging from 20120605 to 20140605 and thousands of records per day. I need to partition the data by departure date.

The versions:

Apache Hadoop - 1.0.4

Apache Hive - 0.9.0

Apache Sqoop - sqoop-1.4.2.bin__hadoop-1.0.0

As far as I know, there are three approaches:

  1. MySQL -> Non-partitioned Hive table -> INSERT from Non-partitioned Hive table into Partitioned Hive table
  2. MySQL -> Partitioned Hive table
  3. MySQL -> Non-partitioned Hive table -> ALTER Non-partitioned Hive table to add PARTITION

    1. is the current, painful approach that I’m following (sketched below)

    2. I read that support for this was added in later(?) versions of Hive and Sqoop, but I was unable to find an example

    3. The syntax requires specifying partitions as key-value pairs, which is not feasible with millions of records, where one cannot enumerate all the partition key-value pairs in advance

Can anyone provide inputs for approaches 2 and 3?
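
For reference, the final step of approach 1 can use Hive dynamic partitioning so the partition values never have to be listed by hand; a sketch, assuming a non-partitioned staging table staging_flights and a partitioned table flights (both names hypothetical), with the partition column selected last:

SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
-- two years of days is ~730 partitions, under Hive's default limit of 1000

INSERT OVERWRITE TABLE flights PARTITION (departure_date)
SELECT col1, col2, departure_date
FROM staging_flights;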

3
As of Sqoop 1.4.3, you are stuck with #1. I don't think that #2 or #3 are possible as of now. You could write an MR job and work directly with the Sqoop metastore to implement #3, but it would not be pretty. – Jit B

3 Answers

0
votes

I guess you can create a Hive partitioned table.
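
For instance, the partitioned table could be declared like this (a sketch; the table and column names are placeholders):

CREATE TABLE flights (
  flight_id INT,
  origin STRING,
  destination STRING
)
PARTITIONED BY (departure_date STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';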

Then write the Sqoop import code for it.

For example:

sqoop import \
  --connect jdbc:mysql://<host>/<database> \
  --username xxxx --password xxxx \
  --table <mysql_table> \
  --num-mappers 1 \
  --warehouse-dir "/warehouse" \
  --hive-import --hive-overwrite \
  --hive-table <hive_table> \
  --hive-partition-key <partition_key> \
  --hive-partition-value <partition_value> \
  --hive-drop-import-delims \
  --fields-terminated-by ',' --lines-terminated-by '\n'

0
votes

You have to create the partitioned table structure first, before you move your data into the partitioned table. When running the Sqoop import, there is no need to specify --hive-partition-key and --hive-partition-value; use --hcatalog-table instead of --hive-table.
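
A sketch of what that might look like (connection details and table names are placeholders; --hcatalog-table requires a newer Sqoop with HCatalog support than the 1.4.2 in the question):

sqoop import \
  --connect jdbc:mysql://<host>/<database> \
  --username xxxx --password xxxx \
  --table flights \
  --hcatalog-database default \
  --hcatalog-table flights \
  --num-mappers 4

With --hcatalog-table pointing at the partitioned table, the departure_date values in the data drive dynamic partitioning, so no static --hive-partition-key/--hive-partition-value pair is needed.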

Manu

0
votes

If this is still something people want to understand, they can use:

sqoop import --driver <driver name> --connect <connection url> --username <user name> -P --table employee --num-mappers <numeral> --warehouse-dir <hdfs dir> --hive-import --hive-table table_name --hive-partition-key departure_date --hive-partition-value "$departure_date"

Notes from the patch:

sqoop import [all other normal command line options] --hive-partition-key ds --hive-partition-value "value"

Some limitations:

  • It only allows for one partition key/value
  • The type of the partition key is hardcoded to be a string
  • With auto-partitioning in Hive 0.7, we may want to adjust this to just one command-line option for the key name and use that column from the DB table to partition
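
Given that only one static key/value pair is allowed per run, one workaround is to drive Sqoop from a shell loop, importing one day per invocation; a sketch, assuming departure_date is stored as an integer like 20120605 and the connection details are placeholders:

# one Sqoop run per day; generate the full 20120605-20140605 range as needed
for d in 20120605 20120606 20120607; do
  sqoop import \
    --connect jdbc:mysql://<host>/<database> \
    --username xxxx --password xxxx \
    --table employee \
    --where "departure_date = $d" \
    --hive-import \
    --hive-table table_name \
    --hive-partition-key departure_date \
    --hive-partition-value "$d" \
    --num-mappers 4
done

If Sqoop complains that departure_date appears both as a data column and as the partition key, exclude it from the imported columns with --columns. At roughly 730 days of data this means roughly 730 invocations, so it trades speed for simplicity.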