What is the equivalent of the following pySpark code in Spark-Scala?
rddKeyTwoVal = sc.parallelize([("cat", (0, 1)), ("spoon", (2, 3))])
# Tuple unpacking in a lambda parameter list is Python 2-only syntax (removed in Python 3).
rddK2VReorder = rddKeyTwoVal.map(lambda (key, (val1, val2)): ((key, val1), val2))
rddK2VReorder.collect()
# [(('cat', 0), 1), (('spoon', 2), 3)] -- This is the output.
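
For reference, here is a sketch of what I think the direct translation would look like (assuming `sc` is an active `SparkContext`, e.g. in the spark-shell). In Scala, nested tuple destructuring inside `map` is done with a `case` pattern in a partial-function block rather than a lambda:

```scala
val rddKeyTwoVal = sc.parallelize(Seq(("cat", (0, 1)), ("spoon", (2, 3))))

// { case ... } destructures the nested tuple, mirroring the Python lambda.
val rddK2VReorder = rddKeyTwoVal.map {
  case (key, (val1, val2)) => ((key, val1), val2)
}

rddK2VReorder.collect()
// In the spark-shell this displays as: Array(((cat,0),1), ((spoon,2),3))
```

Is this the idiomatic way, or is there a cleaner alternative?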