Let's define a Spark ML pipeline that assembles a few columns into a single vector column and then applies feature hashing:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, VectorAssembler}

val df = sqlContext.createDataFrame(Seq((0.0, 1.0, 2.0), (3.0, 4.0, 5.0))).toDF("colx", "coly", "colz")
val va = new VectorAssembler().setInputCols(Array("colx", "coly", "colz")).setOutputCol("ft")
val hashIt = new HashingTF().setInputCol("ft").setOutputCol("ft2")
val pipeline = new Pipeline().setStages(Array(va, hashIt))
Fitting the pipeline with pipeline.fit(df) throws:
java.lang.IllegalArgumentException: requirement failed: The input column must be ArrayType, but got org.apache.spark.mllib.linalg.VectorUDT@f71b0bce
Is there a transformer that will allow VectorAssembler and HashingTF to work together?
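For context, the error occurs because HashingTF requires its input column to be an ArrayType (a sequence of terms, as produced by e.g. Tokenizer), while VectorAssembler outputs a VectorUDT. One possible workaround, sketched here rather than a confirmed fix, is to insert a UDF that unpacks the vector into a plain array before hashing; the vecToArray name and the ftArr column are illustrative, not part of any Spark API:

```scala
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.ml.feature.HashingTF
import org.apache.spark.sql.functions.udf

// Hypothetical bridge: unpack the assembled vector into array<double>,
// which satisfies HashingTF's ArrayType input requirement.
val vecToArray = udf { v: Vector => v.toArray }

val assembled = va.transform(df)  // contains the "ft" vector column
val withArray = assembled.withColumn("ftArr", vecToArray(assembled("ft")))
val hashed = new HashingTF().setInputCol("ftArr").setOutputCol("ft2").transform(withArray)
```

Note that HashingTF hashes each array element as if it were a term, so the raw numeric values are treated as tokens; whether that is meaningful depends on the use case. To keep everything inside a single Pipeline, the same conversion would need to be wrapped in a custom Transformer.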