I am reading a SAS file from Azure Blob Storage, converting it to CSV, and writing the CSV back to Azure Blob Storage. For small files (MBs) this works successfully with the following Spark Scala code.
import org.apache.spark.sql.SQLContext
import com.github.saurfang.sas.spark._

val sqlContext = new SQLContext(sc)

// Read the SAS dataset from blob storage via the spark-sas7bdat reader.
val df = sqlContext.sasFile("wasbs://container@storageaccount/input.sas7bdat")

// Write the DataFrame back out as CSV.
df.write.format("csv").save("wasbs://container@storageaccount/output.csv")
But for large files (GBs) it throws an AnalysisException: wasbs://container@storageaccount/output.csv already exists. I have also tried overwrite mode, but with no luck. Any help would be appreciated.
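For reference, this is roughly how I attempted the overwrite (a minimal sketch, reusing the same df and placeholder path as above):

import org.apache.spark.sql.SaveMode

// Explicit overwrite mode, which I expected to replace any existing
// output at the target path before writing.
df.write.format("csv").mode(SaveMode.Overwrite).save("wasbs://container@storageaccount/output.csv")

Even with this, large files still fail with the same "already exists" exception.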