I'm confused about how many connections Spark would open to the database in the following scenario:
Let's say I have a Spark program running on a single worker node with one executor, and a DataFrame with 10 partitions. I want to write this DataFrame to Teradata. Since the level of parallelism is 10 but there is only one executor, will 10 connections be opened while saving the data, or only 1?
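
For reference, this is roughly the kind of JDBC write I mean. It's only a minimal sketch: the URL, credentials, input path, and table name are placeholders, and I'm assuming the standard Teradata JDBC driver class.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("TeradataWriteExample")
  .getOrCreate()

// Placeholder input; repartition so the DataFrame has 10 partitions,
// which is the level of parallelism I'm asking about.
val df = spark.read.parquet("/path/to/input").repartition(10)

df.write
  .format("jdbc")
  .option("url", "jdbc:teradata://<host>/DATABASE=<db>")        // placeholder URL
  .option("driver", "com.teradata.jdbc.TeraDriver")             // assumed Teradata JDBC driver
  .option("dbtable", "<db>.<target_table>")                     // placeholder table
  .option("user", "<user>")
  .option("password", "<password>")
  .mode(SaveMode.Append)
  .save()
```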