5 votes

I want to load a model which is saved as a joblib file from a Google Cloud Storage bucket. When it is on a local path, we can load it as follows (where model_file is the full path on the system):

loaded_model = joblib.load(model_file)

How can we do the same task with Google Cloud Storage?


4 Answers

0 votes

I don't think that's possible, at least not in a direct way. I thought about a workaround, but it might not be as efficient as you want.

By using the Google Cloud Storage client library [1], you can download the model file first, load it, and delete it when your program ends. Of course, this means that you need to download the file every time you run the code. Here is a snippet:

import os

from google.cloud import storage
import joblib  # sklearn.externals.joblib is deprecated; import joblib directly

storage_client = storage.Client()
bucket_name = '<bucket name>'
model_bucket = 'model.joblib'
model_local = 'local.joblib'

bucket = storage_client.get_bucket(bucket_name)
# select the bucket file
blob = bucket.blob(model_bucket)
# download that file and name it 'local.joblib'
blob.download_to_filename(model_local)
# load that file from the local copy
job = joblib.load(model_local)

# ... use the model, then delete the local copy when your program ends
os.remove(model_local)
10 votes

For anyone Googling around for an answer to this: here are two more options besides the obvious one, which is to use Google AI Platform for model hosting (and online predictions).

Option 1 is to use TemporaryFile like this:

from google.cloud import storage
import joblib  # sklearn.externals.joblib is deprecated; import joblib directly
from tempfile import TemporaryFile

storage_client = storage.Client()
bucket_name = '<bucket name>'
model_bucket = 'model.joblib'

bucket = storage_client.get_bucket(bucket_name)
# select the bucket file
blob = bucket.blob(model_bucket)
with TemporaryFile() as temp_file:
    # download the blob into the temp file
    blob.download_to_file(temp_file)
    temp_file.seek(0)
    # load it with joblib
    model = joblib.load(temp_file)
#use the model
model.predict(...)
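
Note that temp_file.seek(0) is needed because download_to_file leaves the file position at the end of the downloaded bytes, and joblib has to read from the beginning. A nice side effect of TemporaryFile is that the file is removed automatically once the with block exits.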

Option 2 is to use BytesIO like this:

from google.cloud import storage
import joblib  # sklearn.externals.joblib is deprecated; import joblib directly
from io import BytesIO

storage_client = storage.Client()
bucket_name = '<bucket name>'
model_bucket = 'model.joblib'

bucket = storage_client.get_bucket(bucket_name)
# select the bucket file
blob = bucket.blob(model_bucket)
# download the blob into an in-memory file object
model_file = BytesIO()
blob.download_to_file(model_file)
model_file.seek(0)
# load it with joblib
model = joblib.load(model_file)
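
Since BytesIO holds the entire serialized model in memory, this avoids touching the disk at all; for very large models, the TemporaryFile approach above may be the safer choice. As in option 1, the seek(0) call rewinds the buffer before joblib reads it.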
3 votes

An alternate answer as of 2020: using TensorFlow 2 (tf2), you can do this:

import joblib
import tensorflow as tf

gcs_path = 'gs://yourpathtofile'

loaded_model = joblib.load(tf.io.gfile.GFile(gcs_path, 'rb'))
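
This streams the model straight from the bucket: tf.io.gfile.GFile understands gs:// paths, and joblib.load accepts any file-like object, so no local copy is needed.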
0 votes

For folks who are Googling around with this problem, here's another option: the open source modelstore library is a wrapper that deals with the process of saving, uploading, and downloading models from Google Cloud Storage.

Under the hood, it saves scikit-learn models using joblib, creates a tar archive with the files, and up/downloads them from a Google Cloud Storage bucket using blob.upload_from_file() and blob.download_to_filename().
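
For intuition, those under-the-hood steps look roughly like the following hand-rolled sketch. This is illustrative, not modelstore's actual code; the bucket name, the object path, and the stand-in model are all assumptions:

import tarfile

import joblib
from google.cloud import storage
from sklearn.linear_model import LogisticRegression

# stand-in for your trained model (illustrative only)
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

model_local = 'model.joblib'
archive_local = 'artifacts.tar.gz'

# 1. save the scikit-learn model with joblib
joblib.dump(model, model_local)

# 2. bundle the file(s) into a tar archive
with tarfile.open(archive_local, 'w:gz') as tar:
    tar.add(model_local)

# 3. upload the archive to the bucket
bucket = storage.Client().get_bucket('<bucket name>')
blob = bucket.blob('models/artifacts.tar.gz')
with open(archive_local, 'rb') as f:
    blob.upload_from_file(f)

# ... and later, download and unpack it again
blob.download_to_filename(archive_local)
with tarfile.open(archive_local, 'r:gz') as tar:
    tar.extractall('/path/to/a/directory')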

In practice it looks a bit like this (a full example is here):

# Create a modelstore instance
import os

from modelstore import ModelStore

modelstore = ModelStore.from_gcloud(
    os.environ["GCP_PROJECT_ID"],   # Your GCP project ID
    os.environ["GCP_BUCKET_NAME"],  # Your Cloud Storage bucket name
)

# Train and upload a model (this currently works with 9 different ML frameworks)
model = train()  # Replace with your code to train a model
meta_data = modelstore.sklearn.upload("my-model-domain", model=model)

# ... and later when you want to download it
model_path = modelstore.download(
    local_path="/path/to/a/directory",
    domain="my-model-domain",
    model_id=meta_data["model"]["model_id"],
)

The full documentation is here.