I'm using Spark's Python API.
I have a big text file which I load with rdd = sc.textFile("file.txt").
After that, I want to perform a mapPartitions transformation on the rdd.
However, inside mapPartitions I can only access the lines of each partition through a Python iterator. That is not how I'd prefer to consume the data, and it hurts my app's performance.
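Roughly, my current code looks like this (a minimal sketch; do_something is just a placeholder for my real per-line logic):

```python
from pyspark import SparkContext

sc = SparkContext(appName="example")
rdd = sc.textFile("file.txt")

def process(iterator):
    # mapPartitions hands me a Python iterator over the partition's lines
    for line in iterator:
        yield do_something(line)  # placeholder for my actual per-line work

result = rdd.mapPartitions(process).collect()
```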
Is there some other way to access the text file's content on each partition? For example, getting it like a real text file: one string where the lines are separated by \n.
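The closest workaround I can think of is rebuilding that string myself inside mapPartitions, something like the sketch below (textFile strips the line terminators, so I re-join with \n), but I suspect there's a better way:

```python
def as_one_string(iterator):
    # Re-join the partition's lines into a single string, like the original file chunk
    yield "\n".join(iterator)

whole_partitions = rdd.mapPartitions(as_one_string)
```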