1
votes

I run Spark SQL queries that read from Hive tables, and execution is slow (about 15 minutes). I am interested in optimizing the query execution, so I am asking: do these queries use Hive's execution engine, which would make them similar to running the queries in the Hive editor, or does Spark use the Hive metastore only to find the locations of the files and then work with the files directly?

import findspark
findspark.init()  # locate the local Spark installation before importing pyspark

from pyspark.sql import SparkSession

# Spark session on YARN with dynamic allocation enabled
spark = SparkSession.builder \
    .master("yarn") \
    .appName("src_count") \
    .config('spark.executor.cores', '5') \
    .config('spark.executor.memory', '29g') \
    .config('spark.driver.memory', '16g') \
    .config('spark.driver.maxResultSize', '12g') \
    .config("spark.dynamicAllocation.enabled", "true") \
    .config("spark.shuffle.service.enabled", "true") \
    .getOrCreate()

sql = "SELECT S.SERVICE, \
       COUNT(DISTINCT CONTRACT_KEY) DISTINCT_CNT, \
       COUNT(*) CNT ... "

df = spark.sql(sql)  # run the query against the Hive tables
df.toPandas()        # collect the (small) result to the driver as a pandas DataFrame
Why do you want to convert it into a pandas DataFrame? Is there any specific need for that? - vikrant rana
The Spark SQL engine uses Hive in general, even if you don't work with Hive directly. - Ilya Brodezki
@vikrantrana It is an aggregation query and returns a limited number of records, fewer than 20. - Ahmed Gamal
@IlyaBrodezki Does Spark use Hive as a metastore only and run the query itself (as RDDs or DataFrames, for example), or does it use the Hive server for execution, as if I were running the query in the Hive editor? - Ahmed Gamal
You can use Spark built-in functions to improve the performance. Fall back to pandas or a Python function only if something cannot be done with the Spark built-in functions. - vikrant rana
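
As an illustration of that last comment, here is a minimal sketch that keeps the aggregation in Spark built-in functions (groupBy, countDistinct, count). The table name my_db.contracts and the column names are hypothetical placeholders, and spark is the session created in the question:

from pyspark.sql import functions as F

# Aggregate on the executors with built-in functions; only the tiny result reaches the driver.
result = (
    spark.table("my_db.contracts")   # hypothetical Hive table
         .groupBy("SERVICE")
         .agg(F.countDistinct("CONTRACT_KEY").alias("DISTINCT_CNT"),
              F.count("*").alias("CNT"))
)
result.show()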

2 Answers

1
votes

You can read the HIVE table as follows:

  1. Read the entire HIVE table:

df = spark.table("<HIVE_DB>.<HIVE_TBL>")

  2. Read part of the table using a SQL query:

df = spark.sql("<YOUR_SQL_Query>")

Also, in your question you are converting the Spark DataFrame to a pandas DataFrame, which is not recommended. In that case you send all the data from the workers to the driver, which transfers a lot of data across the network and slows down the application; the driver also gets overloaded because it has to hold the entire dataset, and it may run out of memory (OOM).
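
To illustrate that point, a small sketch of alternatives: keep the raw data on the cluster and bring only bounded results to the driver. The table names below are placeholders, and spark is the session from the question:

raw = spark.table("src_db.big_table")          # hypothetical large Hive table

raw.show(20)                                    # inspect a few rows without collecting everything
sample_pd = raw.limit(1000).toPandas()          # or collect only a bounded sample
raw.write.mode("overwrite").saveAsTable("src_db.big_table_snapshot")  # or persist results in Hive rather than collecting
# raw.toPandas()                                # avoid: ships the whole table to the driver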

-1
votes

Thanks all for your comments :)

After some trials, I found that using spark.table gives me more control over writing lengthy SQL statements, which helps in troubleshooting and optimizing their execution.
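
For instance, one way this can look (the table, filter, and column names below are hypothetical): read with spark.table, build the query step by step with the DataFrame API, and use explain() to inspect the physical plan while tuning.

from pyspark.sql import functions as F

src = spark.table("src_db.contracts")            # hypothetical Hive table

per_service = (
    src.where("LOAD_DATE >= '2019-01-01'")       # hypothetical filter, easy to toggle while troubleshooting
       .groupBy("SERVICE")
       .agg(F.countDistinct("CONTRACT_KEY").alias("DISTINCT_CNT"))
)

per_service.explain()   # check the physical plan (scans, shuffles) before running
per_service.show()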