I am not sure from the documentation: when creating a Hive table using HiveContext from Spark, will it use the Spark engine or a standard Hive MapReduce job to perform the task?
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext()
val hc = new HiveContext(sc)

// CTAS: create a Parquet-backed Hive table from the result of the join
hc.sql("""
  CREATE TABLE db.new_table
  STORED AS PARQUET
  AS SELECT
    field1,
    field2,
    field3
  FROM db.src1
  JOIN db.src2
    ON (src1.x = src2.y)
""")
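For what it's worth, one way I found to check which engine actually runs the statement is to inspect the physical plan Spark produces for the underlying SELECT. A minimal sketch, assuming the same tables and columns as above:

// Print the logical and physical plans for the SELECT part of the CTAS.
// If Spark executes the query itself, the physical plan shows Spark
// operators (e.g. HiveTableScan and a join) rather than handing the
// statement off to a Hive MapReduce job.
hc.sql("""
  SELECT field1, field2, field3
  FROM db.src1
  JOIN db.src2
    ON (src1.x = src2.y)
""").explain(true)

The running job should also show up as stages in the Spark application UI, which would not happen if Hive launched its own MapReduce job.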