PySpark lets you create a dictionary when a single row is returned from a DataFrame, using the approach below.
t=spark.sql("SET").withColumn("rw",expr("row_number() over(order by key)")).collect()[0].asDict()
print(t)
print(t["key"])
print(t["value"])
print(t["rw"])
print("Printing using for comprehension")
[print(t[i]) for i in t ]
Results:
{'key': 'spark.app.id', 'value': 'local-1594577194330', 'rw': 1}
spark.app.id
local-1594577194330
1
Printing using for comprehension
spark.app.id
local-1594577194330
1
I'm trying to do the same in Scala Spark. It is possible using the case class approach:
case class download(key:String, value:String,rw:Long)
val t=spark.sql("SET").withColumn("rw",expr("row_number() over(order by key)")).as[download].first
println(t)
println(t.key)
println(t.value)
println(t.rw)
Results:
download(spark.app.id,local-1594580739413,1)
spark.app.id
local-1594580739413
1
In my actual problem I have 200+ columns and don't want to use the case class approach. I'm trying something like the following to avoid the case class option:
val df =spark.sql("SET").withColumn("rw",expr("row_number() over(order by key)"))
(df.columns).zip(df.take(1)(0))
but I'm getting an error:
<console>:28: error: type mismatch;
found : (String, String, Long)
required: Iterator[?]
(df.columns.toIterator).zip(df.take(1)(0))
Is there a way to solve this?
A commenter suggested using tuple.productIterator.
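productIterator does work on a tuple-typed Dataset (Scala tuples extend Product), but with 200+ columns a tuple isn't practical either. A minimal sketch of a Row-based alternative, assuming an active SparkSession named spark: pair the column names with the first row's values via Row.toSeq, or let Row.getValuesMap build the map directly.

// Sketch only: build a Map[String, Any] from a single-row DataFrame
// without declaring a case class. Assumes `spark` is an active SparkSession.
import org.apache.spark.sql.functions.expr

val df = spark.sql("SET").withColumn("rw", expr("row_number() over(order by key)"))
val firstRow = df.take(1)(0)

// Option 1: zip the column names with the row's values (Row.toSeq returns Seq[Any]).
val asMap: Map[String, Any] = df.columns.zip(firstRow.toSeq).toMap

// Option 2: Row.getValuesMap builds the same map from the field names.
val asMap2: Map[String, Any] = firstRow.getValuesMap[Any](df.columns)

println(asMap)
println(asMap("key"))
println(asMap("value"))
println(asMap("rw"))

// Print every value, analogous to the PySpark comprehension above.
asMap.values.foreach(println)

Both variants avoid naming the 200+ columns explicitly; note that the values come back as Any, so individual entries may still need a cast when used.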