0
votes

I have a requirement: I need to write a Spark job that connects to the production source Hive (Server A), pulls the data into a local temporary Hive server, applies the transformations, and loads the result back into the production target (Server B).

In earlier cases our target DB was Oracle, so we used something like the following, which overwrites the table:

AAA.write.format("jdbc").option("url", "jdbc:oracle:thin:@//uuuuuuu:0000/gsahgjj.yyy.com").option("dbtable", "TeST.try_hty").option("user", "aaaaa").option("password", "dsfdss").option("truncate", "true").mode("overwrite").save()

In terms of a Spark overwrite from Server A to Server B, what syntax should we use?

When I try to establish a JDBC connection from one Hive (Server A) to Server B, it does not work. Please help.

1
You don't connect to Hive using JDBC from Spark. As far as I know, you can only connect to a single Hive metastore. You can use Hive's export and import features to move data between Hive servers. - OneCricketeer

1 Answer

0
votes

You can connect to Hive via JDBC if it is a remote server. Get the HiveServer2 Thrift URL and port details for the target and connect via JDBC. It should work.
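A minimal sketch of what that could look like: read from the source Hive (Server A) through the metastore Spark is configured against, then write to the target (Server B) over its HiveServer2 JDBC endpoint. The hostname, port, credentials, and table names below are placeholders, not values from the question, and the job assumes the Hive JDBC driver (`org.apache.hive.jdbc.HiveDriver`) is on the Spark classpath.

```scala
import org.apache.spark.sql.{SparkSession, SaveMode}

object HiveToHiveJob {
  // Build a HiveServer2 JDBC URL from host/port/database.
  def hive2Url(host: String, port: Int, db: String): String =
    s"jdbc:hive2://$host:$port/$db"

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("prod-to-prod-etl")
      .enableHiveSupport() // uses the metastore this Spark install points at (Server A)
      .getOrCreate()

    // 1. Read from the source Hive (Server A).
    val src = spark.sql("SELECT * FROM source_db.source_table")

    // 2. ...apply your transformations here...

    // 3. Write to the target Hive (Server B) over its HiveServer2 JDBC endpoint.
    //    Host, port, and credentials below are placeholders.
    src.write
      .format("jdbc")
      .option("url", hive2Url("serverB.example.com", 10000, "target_db"))
      .option("driver", "org.apache.hive.jdbc.HiveDriver")
      .option("dbtable", "target_db.target_table")
      .option("user", "etl_user")
      .option("password", "etl_password")
      .mode(SaveMode.Overwrite)
      .save()
  }
}
```

Note that writing to Hive through JDBC is slow for large volumes; for bulk moves, the Hive EXPORT/IMPORT route mentioned in the comment above is often the more practical option.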