1 vote

Is it possible to train an XGBoost model in Python and use the saved model to predict in a Spark environment? That is, I want to train the XGBoost model using sklearn, save the model, then load the saved model in Spark and predict in Spark. Is this possible?

edit: Thanks all for the answers, but my question is really this. I see the below issues when I train and predict with different bindings of XGBoost.

  1. During training I would be using XGBoost in Python, and when predicting I would be using XGBoost in MLlib.

  2. I have to load the saved model from XGBoost in Python (e.g. an XGBoost.model file) to predict in Spark. Would this model be compatible with the predict function in MLlib?

  3. The data input formats of XGBoost in Python and XGBoost in Spark MLlib are different. Spark takes a vector-assembled format, but with Python we can feed the DataFrame as is. So how do I feed the data when I am trying to predict in Spark with a model trained in Python? Can I feed the data without VectorAssembler? Would the XGBoost predict function in Spark MLlib take non-vector-assembled data as input?
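
For illustration, this is roughly what I mean by the format difference (a minimal sketch; the toy column names f1/f2 and the data are made up for this example):

```python
import pandas as pd
import xgboost as xgb
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler

# Python side: XGBoost's sklearn API accepts a pandas DataFrame directly.
pdf = pd.DataFrame({"f1": [1.0, 2.0, 3.0], "f2": [0.5, 1.5, 2.5], "label": [0, 1, 0]})
clf = xgb.XGBClassifier(n_estimators=10).fit(pdf[["f1", "f2"]], pdf["label"])

# Spark ML side: feature columns must first be assembled into a single vector column.
spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(pdf)
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(sdf)
assembled.select("features", "label").show()
```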

3 Comments
So, you want to train the XGBoost model using Spark MLlib or sklearn? – Sarath Chandra Vema
Edited the question. Do check. – Anjala Abdurehman
You can use Spark as an orchestration system to both train and predict sklearn models via the spark-sklearn module. It will push iterations of each model to different Spark executors. – thePurplePython

3 Answers

0 votes

You can run your Python script on Spark using the spark-submit command, so that your Python code executes on Spark and you can make the predictions there.
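
For example, a prediction script of that kind might look roughly like the sketch below (the file names xgb.model and test_data.parquet are just placeholders, not from the question); it would be launched with spark-submit predict.py:

```python
# predict.py -- a rough sketch, launched with: spark-submit predict.py
import xgboost as xgb
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("xgb-predict").getOrCreate()

# Load the model that was trained and saved outside Spark (hypothetical file name).
booster = xgb.Booster()
booster.load_model("xgb.model")

# Read the test data with Spark, then collect it to the driver for scoring.
test_pdf = spark.read.parquet("test_data.parquet").toPandas()
preds = booster.predict(xgb.DMatrix(test_pdf))   # assumes only feature columns are present
print(preds[:10])
```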

0 votes

You can:

  1. load/munge the data using PySpark SQL,
  2. then bring the data to the local driver using collect/toPandas (performance bottleneck),
  3. then train XGBoost on the local driver,
  4. then prepare the test data as an RDD,
  5. broadcast the XGBoost model to each RDD partition, then predict the data in parallel (see the sketch further down).

This can all be done in one script that you spark-submit, but to keep things more concise, I recommend splitting train and test into two scripts.

Because steps 2 and 3 happen at the driver level and do not use any cluster resources, your workers are not doing anything during training.
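
A minimal sketch of this flow (the paths, the "label" column, and the partition-level helper are placeholders for illustration, not exact code):

```python
import pandas as pd
import xgboost as xgb
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("train-local-predict-parallel").getOrCreate()

# 1. load / munge data with PySpark SQL (hypothetical path and a "label" column)
train_sdf = spark.read.parquet("train.parquet")

# 2. bring the training data to the local driver -- this is the performance bottleneck
train_pdf = train_sdf.toPandas()

# 3. train XGBoost locally on the driver
model = xgb.XGBClassifier(n_estimators=100)
model.fit(train_pdf.drop("label", axis=1), train_pdf["label"])

# 4. prepare the test data as an RDD
test_sdf = spark.read.parquet("test.parquet")
all_cols = test_sdf.columns
feature_cols = [c for c in all_cols if c != "label"]

# 5. broadcast the fitted model and predict each partition in parallel on the workers
bc_model = spark.sparkContext.broadcast(model)

def predict_partition(rows):
    pdf = pd.DataFrame(list(rows), columns=all_cols)
    return iter(bc_model.value.predict(pdf[feature_cols]).tolist())

predictions = test_sdf.rdd.mapPartitions(predict_partition).collect()
```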

0 votes

Here is a similar implementation of what you are looking for. I have an SO post explaining the details, as I am trying to troubleshoot the errors described in that post to get the code in the notebook working.

XGBoost Spark One Model Per Worker Integration

The idea is to train using XGBoost and then, via Spark, orchestrate each model to run on a Spark worker; predictions can then be applied via XGBoost's predict_proba() or Spark ML's predict().
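
For example, the predict_proba() part might be applied on the workers through a pandas UDF, roughly like this (the model file name and the feature columns f1/f2 are placeholders, and this assumes Spark 3.x-style pandas UDFs, not the notebook's actual code):

```python
import pandas as pd
import xgboost as xgb
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

# A model trained with the sklearn-style API and saved earlier (hypothetical file name).
model = xgb.XGBClassifier()
model.load_model("xgb_model.json")
bc_model = spark.sparkContext.broadcast(model)

@pandas_udf(DoubleType())
def positive_proba(f1: pd.Series, f2: pd.Series) -> pd.Series:
    features = pd.concat([f1, f2], axis=1).to_numpy()
    # probability of the positive class from predict_proba()
    return pd.Series(bc_model.value.predict_proba(features)[:, 1])

scored = (spark.read.parquet("test.parquet")      # hypothetical test data
          .withColumn("p1", positive_proba("f1", "f2")))
```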