Data Looks Like:
col 1 col 2 col 3 col 4
row 1 row 1 row 1 row 1
row 2 row 2 row 2 row 2
row 3 row 3 row 3 row 3
row 4 row 4 row 4 row 4
row 5 row 5 row 5 row 5
row 6 row 6 row 6 row 6
Problem: I want to partition this data so that, say, rows 1 and 2 are processed as one partition, rows 3 and 4 as another, and rows 5 and 6 as another, and then produce JSON by merging each row with the column headers (headers as keys, row values as values).
Output should be like:
[
{"col1":"row1","col2":"row1","col3":"row1","col4":"row1"},
{"col1":"row2","col2":"row2","col3":"row2","col4":"row2"},
{"col1":"row3","col2":"row3","col3":"row3","col4":"row3"},
{"col1":"row4","col2":"row4","col3":"row4","col4":"row4"},...
]
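For the JSON step on its own, a minimal sketch of a zip-based approach (the header array, sample rows, and app name below are placeholder assumptions, and the values are not JSON-escaped):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RowsToJson {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("rows-to-json").setMaster("local[*]"))

    // Hypothetical stand-ins for the real input: the column headers plus an
    // RDD holding one Array[String] per row.
    val headers = Array("col1", "col2", "col3", "col4")
    val rows = sc.parallelize(Seq(
      Array("row1", "row1", "row1", "row1"),
      Array("row2", "row2", "row2", "row2")
    ))

    // Pair each value with its header via zip, then render one JSON object
    // per row. Values are not escaped, so this is a sketch only.
    val jsonObjects = rows.map { row =>
      headers.zip(row)
        .map { case (k, v) => s"\"$k\":\"$v\"" }
        .mkString("{", ",", "}")
    }

    // Collect on the driver and wrap in a JSON array (fine for small data).
    println(jsonObjects.collect().mkString("[", ",", "]"))

    sc.stop()
  }
}
```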
I tried using the repartition(num) available in Spark, but it does not partition exactly as I want, so the generated JSON data is not valid. I had an issue where my program took the same time to process the data even though I was using different numbers of cores, which can be found here; the repartition suggestion was made by @Patrick McGloin. The code mentioned in that question is what I am trying to do.
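For reference, a minimal sketch of the repartition attempt (the partition count is a placeholder, and `rows` is the hypothetical RDD from the sketch above); it shows that rows get reshuffled with no guarantee that consecutive rows stay together:

```scala
// repartition(3) shuffles rows across 3 partitions arbitrarily, so rows 1-2
// may not land in the same partition, which breaks the intended grouping.
val counts = rows.repartition(3)
  .mapPartitionsWithIndex((i, it) => Iterator(s"partition $i holds ${it.size} rows"))
counts.collect().foreach(println)
```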
Comments:

You may not need to partition, because you can generate your JSON from a single RDD without particular concern for partitioning. If you know the array position that your keys need to be applied to, you can use Scala's `zip` in an RDD `map` call. – Alister Lee

`repartition` should get you what you want. Make sure you have enough workers/executors in your cluster to utilise your cores, otherwise the partitions will run sequentially anyway. On the other hand, if you need to process a particular subset of the rows together, then I agree with @Lukasz Tracewski below, but you may be able to use `groupByKey`, which would be simpler. – Alister Lee