I want to create a Spark DataFrame from a Seq in Scala. The field types in the Seq are String, DataFrame, Long and Date.
I tried the approach below but I am getting an error; maybe it is not the right way to deal with this.
val Total_Record_Count = TotalRecordDF.count // getting the total record count from a DataFrame
val Rejected_Record_Count = rejectDF.count // getting the rejected record count from a DataFrame
val Batch_Run_ID = spark.range(1).select(unix_timestamp as "current_timestamp") // single-row DataFrame holding the current timestamp
case class JobRunDetails(Job_Name: String, Batch_Run_ID: DataFrame, Source_Entity_Name: String, Total_Record_Count: Long, Rejected_Record_Count: Long, Reject_Record_File_Path: String, Load_Date: String)
val inputSeq = Seq(JobRunDetails("HIT", Batch_Run_ID, "HIT", Total_Record_Count, Rejected_Record_Count, "blob.core.windows.net/feedlayer", Load_Date))
I tried
val df = sc.parallelize(inputSeq).toDF()
but it throws the error "java.lang.UnsupportedOperationException: No Encoder found for org.apache.spark.sql.DataFrame".
I just want to create a DataFrame from the sequence. Any help would be highly appreciated. Note: I am using Spark 2.3 on Databricks.
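I suspect the problem is that Batch_Run_ID is itself a DataFrame and Spark has no Encoder for a DataFrame column, so the case class may need to hold only encodable types. Below is a minimal sketch of the direction I am guessing at (the JobRunDetails2 name and the batchRunId extraction are my own; Load_Date is assumed to be a String defined elsewhere in my code). Is this the right way to do it?

import org.apache.spark.sql.functions.unix_timestamp
import spark.implicits._

// Keep only encodable field types; store the batch run id as a Long instead of a DataFrame.
case class JobRunDetails2(Job_Name: String, Batch_Run_ID: Long, Source_Entity_Name: String, Total_Record_Count: Long, Rejected_Record_Count: Long, Reject_Record_File_Path: String, Load_Date: String)

// Pull the timestamp out of the one-row DataFrame as a plain Long value.
val batchRunId: Long = spark.range(1).select(unix_timestamp().as("current_timestamp")).head().getLong(0)

// Now toDF() can find an encoder for every field.
val df = Seq(JobRunDetails2("HIT", batchRunId, "HIT", Total_Record_Count, Rejected_Record_Count, "blob.core.windows.net/feedlayer", Load_Date)).toDF()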