The condition is: column names starting with Data-C are StringType columns, Data-D are DateType columns, and Data-N are DoubleType columns. I have a DataFrame in which every column's datatype is string, so I am trying to update the datatypes like this:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import sparkSession.implicits._
val diff_set = Seq("col7", "col8", "col15", "Data-C-col1", "Data-C-col3", "Data-N-col2", "Data-N-col4", "Data-D-col16", "Data-D-col18", "Data-D-col20").toSet
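// Sample DataFrame: one Int "value" column, plus the Data-* columns appended as null strings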
var df = (1 to 10).toDF
df = df.select(df.columns.map(c => col(c).as(c)) ++ diff_set.map(c => lit(null).cast("string").as(c)): _*)
df.printSchema()
// This foreach loop performs poorly: every withColumn call adds another projection to the plan
df.columns.foreach { x =>
  if (x.startsWith("Data-C")) {
    df = df.withColumn(x, col(x).cast(StringType))
  } else if (x.startsWith("Data-D")) {
    df = df.withColumn(x, col(x).cast(DateType))
  } else if (x.startsWith("Data-N")) {
    df = df.withColumn(x, col(x).cast(DoubleType))
  }
}
df.printSchema()
Can this be done more elegantly and efficiently (performance-wise) in Scala Spark?
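For reference, the kind of thing I was imagining is a single select that builds all the casts up front instead of reassigning df once per column; this is only a minimal sketch under my reading of the prefix rules (casted is just a name I made up), not something I have benchmarked:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Map every column to its cast in one pass, then apply them in a single select,
// so the plan gets one projection instead of one per withColumn call.
val casted = df.select(df.columns.map { c =>
  if (c.startsWith("Data-D")) col(c).cast(DateType).as(c)
  else if (c.startsWith("Data-N")) col(c).cast(DoubleType).as(c)
  else col(c) // Data-C columns (and the rest) are already StringType
}: _*)

casted.printSchema()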