1 vote

I'm trying to replace null with 0 in the DataFrame using the UDF below. Where could I be going wrong? The code seems straightforward, but it's not working as expected.

I tried to create a UDF that replaces any null column value with 0.

Thank you all in advance.

//imports

object PlayGround {
  def missingValType2(n: Int): Int = {
    if (n == null) {
      0
    } else {
      n
    }
  }

   def main(args: Array[String]): Unit = {

    Logger.getLogger("org").setLevel(Level.ERROR)
    val spark = SparkSession
      .builder()
      .appName("PlayGround")
      .config("spark.sql.warehouse.dir", "file:///C:/temp")
      .master("local[*]")
      .getOrCreate()

    val missingValUDFType2 = udf[Int, Int](missingValType2)

     val schema = List(
      StructField("name", types.StringType, false),
      StructField("age", types.IntegerType, true)
    )

    val data = Seq(
      Row("miguel", null),
      Row("luisa", 21)
    )
    val df = spark.createDataFrame(
      spark.sparkContext.parallelize(data),
      StructType(schema)
    )
    df.show(false)
    df.withColumn("ageNullReplace",missingValUDFType2($"age")).show()

  }
}

/**
  * +------+----+
  * |name  |age |
  * +------+----+
  * |miguel|null|
  * |luisa |21  |
  * +------+----+
  *
  * Below is the current output.
  * +------+----+--------------+
  * |  name| age|ageNullReplace|
  * +------+----+--------------+
  * |miguel|null|          null|
  * | luisa|  21|            21|
  * +------+----+--------------+*/

Expected output:

 * +------+----+--------------+
 * |  name| age|ageNullReplace|
 * +------+----+--------------+
 * |miguel|null|             0|
 * | luisa|  21|            21|
 * +------+----+--------------+
Why are you trying to use a UDF? You can just use `when` within your `withColumn` to do the same. UDFs are not recommended when a native function can do the same job. – Aaron

Hello @user2315840, thanks for responding. Yes, I could have done that. I read the User Defined Functions section of mungingdata.com/apache-spark/dealing-with-null and thought it was not advisable. Let me try your solution quickly. – Pavan_Obj
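For background on why the posted UDF cannot work: a Scala `Int` is a JVM primitive and can never hold null, so `n == null` is always false (the compiler even warns about this comparison). On top of that, Spark short-circuits a UDF whose argument type is primitive when the input is null, emitting null without calling the function at all. A minimal plain-Scala sketch (no Spark required; `NullCheckDemo` and `fillNull` are names made up for illustration):

```scala
object NullCheckDemo {
  // Same shape as the question's UDF: the parameter is a primitive Int,
  // so the null check can never succeed.
  def missingValType2(n: Int): Int =
    if (n == null) 0 else n // always takes the else branch

  // A nullable value should instead be modeled with a boxed type such as
  // java.lang.Integer (or Option[Int]), which *can* actually be null.
  def fillNull(n: java.lang.Integer): Int =
    if (n == null) 0 else n

  def main(args: Array[String]): Unit = {
    println(missingValType2(21)) // prints 21
    println(fillNull(null))      // prints 0
    println(fillNull(21))        // prints 21
  }
}
```

In Spark terms, declaring the UDF's input as `java.lang.Integer` or `Option[Int]` would at least let the null reach the function, though the native approaches in the answers are simpler.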

2 Answers

2 votes

There is no need for a UDF. You can apply na.fill to a list of type-specific columns in the DataFrame, as shown below:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
  ("miguel", None), ("luisa", Some(21))
).toDF("name", "age")

df.na.fill(0, Seq("age")).show
// +------+---+
// |  name|age|
// +------+---+
// |miguel|  0|
// | luisa| 21|
// +------+---+
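As a usage note (a sketch, not run here, assuming the `df` from above), `na.fill` also accepts a `Map` from column name to replacement value, so different columns can get different defaults in one call:

```scala
// Each listed column gets its own type-appropriate fill value.
df.na.fill(Map("age" -> 0, "name" -> "unknown")).show()
```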
1 vote

You can use `withColumn` with a `when` condition, as below (code is not tested):

df.withColumn("ageNullReplace", when(col("age").isNull, lit(0)).otherwise(col("age")))

Note that `otherwise` is needed in the code above; without it, rows where `age` is not null would get null in the new column.

Hope that helps
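An alternative sketch (untested here, assuming the same `df` as in the question): `coalesce` from `org.apache.spark.sql.functions` returns the first non-null of its arguments, which reads a little more directly than `when`/`otherwise` for this case:

```scala
import org.apache.spark.sql.functions.{coalesce, col, lit}

// Null ages fall through to the literal 0; non-null ages pass unchanged.
df.withColumn("ageNullReplace", coalesce(col("age"), lit(0))).show()
```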