I am working with PySpark connected to an AWS instance (r5d.xlarge, 4 vCPUs, 32 GiB) that hosts a database of about 25 GB. When I read some tables I get the error:
Py4JJavaError: An error occurred while calling o57.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
I tried to research the error myself, but unfortunately there is not much information on this issue.
Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.master('local').\
    config('spark.jars.packages', 'mysql:mysql-connector-java:5.1.44').\
    appName('test').getOrCreate()

df = spark.read.format('jdbc').\
    option('url', 'jdbc:mysql://xx.xxx.xx.xxx:3306').\
    option('driver', 'com.mysql.jdbc.Driver').\
    option('user', 'xxxxxxxxxxx').\
    option('password', 'xxxxxxxxxxxxxxxxxxxx').\
    option('dbtable', 'dbname.tablename').\
    load()
df.printSchema()
Here printSchema() works fine, but then:
df_1 = df.select(['col1', 'col2', 'col3',
                  'col4', 'col5', 'col6']).show()
Py4JJavaError: An error occurred while calling o57.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage
0.0 (TID 0, localhost, executor driver): java.lang.OutOfMemoryError: GC
overhead limit exceeded
Does anybody have an idea how I can solve this problem?
Have you tried increasing spark.executor.memory and spark.driver.memory? – cronoik
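Per the comment above, the first thing to try is giving the JVM more heap. A minimal sketch, assuming a 32 GiB machine; the 16g values are illustrative, not taken from the question, and spark.driver.memory only takes effect if it is set before the SparkSession (and its underlying JVM) is created:

from pyspark.sql import SparkSession

# In local mode the driver is the only JVM, so spark.driver.memory is the
# setting that actually matters; spark.executor.memory is included only for
# completeness. 16g is an assumed value for a 32 GiB box.
spark = SparkSession.builder.master('local').\
    config('spark.jars.packages', 'mysql:mysql-connector-java:5.1.44').\
    config('spark.driver.memory', '16g').\
    config('spark.executor.memory', '16g').\
    appName('test').getOrCreate()

On its own this may not be enough for a 25 GB table; the partitioned read sketched after the next comment addresses the lack of parallelism.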
Check df.rdd.getNumPartitions() ... since you are reading from JDBC you will only get 1 partition, hence you need to create a row boundary so the data can be split and distributed ... right now you are trying to process 25 GB on a single machine with no parallelism, hence the OOM error – thePurplePython
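What this comment describes is Spark's partitioned JDBC read: the partitionColumn, lowerBound, upperBound and numPartitions options make Spark issue one range query per partition instead of pulling the entire table through a single task. A sketch under the assumption that the table has a numeric, ideally indexed, column named id whose values span roughly 1 to 10,000,000 (both the column name and the bounds are placeholders to replace with real values):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master('local[4]').\
    config('spark.jars.packages', 'mysql:mysql-connector-java:5.1.44').\
    appName('test').getOrCreate()

# Spark splits [lowerBound, upperBound] into numPartitions ranges on
# partitionColumn and runs one JDBC query per range, so no single task
# has to hold the whole 25 GB table in memory at once.
df = spark.read.format('jdbc').\
    option('url', 'jdbc:mysql://xx.xxx.xx.xxx:3306').\
    option('driver', 'com.mysql.jdbc.Driver').\
    option('user', 'xxxxxxxxxxx').\
    option('password', 'xxxxxxxxxxxxxxxxxxxx').\
    option('dbtable', 'dbname.tablename').\
    option('partitionColumn', 'id').\
    option('lowerBound', '1').\
    option('upperBound', '10000000').\
    option('numPartitions', '8').\
    load()

print(df.rdd.getNumPartitions())  # should now report 8 instead of 1

Note that lowerBound and upperBound only control how the ranges are cut, not which rows are read; rows outside the bounds still end up in the first and last partitions.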