I have the following JSON in this format:
{"year": "2020", "id": "1", "fruit": "Apple", "cost": "100"}
{"year": "2020", "id": "2", "fruit": "Kiwi", "cost": "200"}
{"year": "2020", "id": "3", "fruit": "Cherry", "cost": "300"}
{"year": "2020", "id": "4", "fruit": "Apple", "cost": "400"}
{"year": "2020", "id": "5", "fruit": "Mango", "cost": "500"}
{"year": "2020", "id": "6", "fruit": "Kiwi", "cost": "600"}
It is of type pyspark.sql.dataframe.DataFrame.
How can I split this JSON file into multiple JSON files, one per fruit, and save them in a year directory using PySpark? Like:
directory: path.../2020/<all split json files>
Apple.json
{"year": "2020", "id": "1", "fruit": "Apple", "cost": "100"}
{"year": "2020", "id": "4", "fruit": "Apple", "cost": "400"}
Kiwi.json
{"year": "2020", "id": "2", "fruit": "Kiwi", "cost": "200"}
{"year": "2020", "id": "6", "fruit": "Kiwi", "cost": "600"}
Mango.json
{"year": "2020", "id": "5", "fruit": "Mango", "cost": "500"}
Cherry.json
{"year": "2020", "id": "3", "fruit": "Cherry", "cost": "300"}
Also, if I encounter a different year, how do I push the files in a similar way, like: path.../2021/<all split json files>?
Initially, I tried finding all the unique fruits and creating a list, then creating multiple DataFrames and pushing the JSON values into them, and finally converting every DataFrame to JSON. But I find this inefficient.
Then I also checked this link, but the issue there is that it creates key-value pairs in dict form, which is slightly different from what I need.
Then I also learned about the PySpark groupBy method. It seems promising because I could groupBy() the fruit values and then split the JSON file, but I feel I am missing something.