What are all these options? spark.read.jdbc refers to reading a table from an RDBMS.
Parallelism is the power of Spark; to get it for a JDBC read you have to supply all of these options together.
Question[s] :-)
1) The documentation seems to indicate that these fields are optional. What happens if I don't provide them?
Answer : you get the default (i.e. poor) parallelism; the whole table is read through a single connection into a single partition.
Depending on the scenario, the developer has to take care of the performance-tuning strategy and make sure the data is split across boundaries (a.k.a. partitions), which in turn become tasks that run in parallel.
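For illustration, a minimal sketch of the default case (jdbcUrl, the table name "employees" and connectionProperties are hypothetical placeholders here, and spark is an existing SparkSession, e.g. in spark-shell): without the partitioning options the JDBC read typically lands in a single partition, i.e. one connection and one task.
import java.util.Properties

// Hypothetical connection details, for illustration only.
val jdbcUrl = "jdbc:mysql://dbhost:3306/mydb"
val connectionProperties = new Properties()
connectionProperties.setProperty("user", "dbuser")
connectionProperties.setProperty("password", "dbpass")

// No partitionColumn/lowerBound/upperBound/numPartitions:
// the whole table comes through one connection.
val singleDf = spark.read.jdbc(jdbcUrl, "employees", connectionProperties)
println(singleDf.rdd.getNumPartitions)   // typically 1 -> no read parallelism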
2) How does Spark know how to partition the queries? How efficient will that be?
JDBC reads (referring to the Databricks docs):
You can provide split boundaries based on the dataset’s column values.
- These options specify the parallelism on read.
- These options must all be specified if any of them is specified.
Note
These options specify the parallelism of the table read. lowerBound and upperBound decide the partition stride, but do not filter the rows in the table. Therefore, Spark partitions and returns all rows in the table.
Example 1:
You can split the table read across executors on the emp_no column using the partitionColumn, lowerBound, upperBound, and numPartitions parameters.
val df = spark.read.jdbc(
  url = jdbcUrl,
  table = "employees",
  columnName = "emp_no",
  lowerBound = 1L,
  upperBound = 100000L,
  numPartitions = 100,
  connectionProperties = connectionProperties)
Also, numPartitions is the number of parallel connections you are asking the RDBMS to serve while reading the data. By setting numPartitions you are limiting the number of connections, so you do not exhaust the connections on the RDBMS side (see the sketch below).
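For example (a sketch reusing the hypothetical jdbcUrl and connectionProperties from above), you can cap the number of concurrent connections at the read and repartition afterwards if downstream stages need more parallelism:
// At most 8 parallel connections hit the RDBMS during the read.
val dfCapped = spark.read.jdbc(
  url = jdbcUrl,
  table = "employees",
  columnName = "emp_no",
  lowerBound = 1L,
  upperBound = 100000L,
  numPartitions = 8,
  connectionProperties = connectionProperties)

// More partitions for CPU-heavy work downstream, without extra load on the database.
val dfWide = dfCapped.repartition(64)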
Example 2 (source: a DataStax presentation on loading Oracle data into Cassandra):
val basePartitionedOracleData = sqlContext
  .read
  .format("jdbc")
  .options(
    Map[String, String](
      "url" -> "jdbc:oracle:thin:username/password@//hostname:port/oracle_svc",
      "dbtable" -> "ExampleTable",
      "lowerBound" -> "1",
      "upperBound" -> "10000",
      "numPartitions" -> "10",
      "partitionColumn" -> "KeyColumn"
    )
  )
  .load()
The last four arguments in that map are there for the purpose of getting a partitioned dataset. If you pass any of them,
you have to pass all of them.
When you pass these additional arguments in, here’s what it does:
It builds a SQL statement template in the format

SELECT * FROM {tableName} WHERE {partitionColumn} >= ? AND {partitionColumn} < ?

It sends {numPartitions} statements to the DB engine. If you supplied these values: {dbTable=ExampleTable, lowerBound=1, upperBound=10,000, numPartitions=10, partitionColumn=KeyColumn}, it would create these ten statements:
SELECT * FROM ExampleTable WHERE KeyColumn >= 1 AND KeyColumn < 1001
SELECT * FROM ExampleTable WHERE KeyColumn >= 1001 AND KeyColumn < 2001
SELECT * FROM ExampleTable WHERE KeyColumn >= 2001 AND KeyColumn < 3001
SELECT * FROM ExampleTable WHERE KeyColumn >= 3001 AND KeyColumn < 4001
SELECT * FROM ExampleTable WHERE KeyColumn >= 4001 AND KeyColumn < 5001
SELECT * FROM ExampleTable WHERE KeyColumn >= 5001 AND KeyColumn < 6001
SELECT * FROM ExampleTable WHERE KeyColumn >= 6001 AND KeyColumn < 7001
SELECT * FROM ExampleTable WHERE KeyColumn >= 7001 AND KeyColumn < 8001
SELECT * FROM ExampleTable WHERE KeyColumn >= 8001 AND KeyColumn < 9001
SELECT * FROM ExampleTable WHERE KeyColumn >= 9001 AND KeyColumn < 10001
And then it would put the results of each of those queries in its own partition in Spark. (In practice Spark leaves the first range open below and the last range open above, so rows outside [lowerBound, upperBound] are still returned rather than filtered out, which matches the note from the Databricks docs above.)
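Roughly, the range boundaries come from lowerBound, upperBound and numPartitions. The snippet below is an illustrative sketch of that arithmetic, not Spark's exact internal code (Spark rounds the stride slightly differently and leaves the first and last ranges open, as noted above):
val lowerBound    = 1L
val upperBound    = 10000L
val numPartitions = 10

// Simplified stride; Spark's internal rounding differs slightly.
val stride = (upperBound - lowerBound + 1) / numPartitions   // 1000 here

val predicates = (0 until numPartitions).map { i =>
  val start = lowerBound + i * stride
  val end   = start + stride
  s"KeyColumn >= $start AND KeyColumn < $end"
}
predicates.foreach(println)   // prints the ten ranges listed above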
Question[s] :-)
If I DO specify these options, how do I ensure that the partition sizes are roughly even if the partitionColumn is not evenly distributed?
Will my 1st and 20th executors get most of the work, while the other 18 executors sit there mostly idle?
If so, is there a way to prevent this?
All of these questions have one answer. Below is the way:
1) You need to know how many records/rows end up in each partition; based on that you can repartition or coalesce.
Snippet 1 (Spark 1.6+): Spark provides a facility to find out how many records are in each partition. spark_partition_id() exists in org.apache.spark.sql.functions.
import org.apache.spark.sql.functions._
val df = ??? // placeholder: the DataFrame you read from the RDBMS via spark.read.jdbc
df.withColumn("partitionId", spark_partition_id()).groupBy("partitionId").count.show
Snippet 2 (for all versions of Spark):
// toDF on an RDD needs the implicits in scope: import spark.implicits._
df.rdd
  .mapPartitionsWithIndex { case (i, rows) => Iterator((i, rows.size)) }
  .toDF("partition_number", "NumberOfRecordsPerPartition")
  .show
Then you need to apply your strategy again: tune the query ranges, repartition, and so on. For per-partition processing you can use mapPartitions or foreachPartition; a small sketch follows.
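A hedged sketch (the column name emp_no is just the earlier example; choose your own partition column and counts based on the per-partition numbers you measured):
import org.apache.spark.sql.functions.col

// Rebalance a skewed JDBC read before heavy processing: a full shuffle on the key column...
val balanced  = df.repartition(50, col("emp_no"))
// ...or merge many small partitions without a full shuffle.
val compacted = df.coalesce(10)
After rebalancing, per-partition work (e.g. batched writes) can be done with mapPartitions or foreachPartition on the result.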
Conclusion : I prefer using the given options on numeric columns, since I have seen them divide the data uniformly across boundaries/partitions.
Sometimes it may not be possible to use these options; then manually tuning the partitions/parallelism is required.
Update :
With the approach below we can achieve a uniform distribution:
- Fetch the primary key of the table.
- Find the key's minimum and maximum values.
- Execute Spark with those values.
def main(args: Array[String]): Unit = {
  // parsing input parameters ...

  // executeQuery is a plain-JDBC helper (not shown in the original; see the sketch
  // after this block): it runs the SQL and returns a ResultSet on its first row.
  val primaryKey = executeQuery(url, user, password,
    s"SHOW KEYS FROM ${config("schema")}.${config("table")} WHERE Key_name = 'PRIMARY'").getString(5)
  val result = executeQuery(url, user, password,
    s"select min(${primaryKey}), max(${primaryKey}) from ${config("schema")}.${config("table")}")

  val min = result.getString(1).toInt
  val max = result.getString(2).toInt
  // aim for roughly 5000 rows per partition
  val numPartitions = (max - min) / 5000 + 1

  val spark = SparkSession.builder().appName("Spark reading jdbc").getOrCreate()

  var df = spark.read.format("jdbc").
    option("url", s"${url}${config("schema")}").
    option("driver", "com.mysql.jdbc.Driver").
    option("lowerBound", min).
    option("upperBound", max).
    option("numPartitions", numPartitions).
    option("partitionColumn", primaryKey).
    option("dbtable", config("table")).
    option("user", user).
    option("password", password).load()

  // some data manipulations here ...

  df.repartition(10).write.mode(SaveMode.Overwrite).parquet(outputPath)
}
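The executeQuery helper used above is not part of the original snippet; a minimal sketch of what it could look like with plain JDBC (an assumption, not the author's actual code) is:
import java.sql.{DriverManager, ResultSet}

// Assumed helper: runs a query over plain JDBC and returns a ResultSet
// positioned on its first row (the caller reads columns and should close the connection).
def executeQuery(url: String, user: String, password: String, sql: String): ResultSet = {
  val connection = DriverManager.getConnection(url, user, password)
  val statement  = connection.createStatement()
  val resultSet  = statement.executeQuery(sql)
  resultSet.next()   // advance to the first row so getString(n) works
  resultSet
}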