I have been reading for a few weeks about different approaches to ML in production. I decided to test Kubeflow, and to test it on GCP. I started to deploy Kubeflow on GCP following the guideline on the official Kubeflow website (here: https://www.kubeflow.org/docs/gke/). I ran into a lot of issues, and it was quite hard to fix them. I started to look for a better approach and noticed that GCP AI Platform now offers deploying Kubeflow Pipelines in just a few simple steps (https://cloud.google.com/ai-platform/pipelines/docs/connecting-with-sdk).
After setting this up easily, I had a few questions and doubts. If it is this easy to set up and deploy Kubeflow, why do we have to go through such a cumbersome process as suggested on the official Kubeflow website? And since creating a Kubeflow pipeline on GCP basically means I am deploying Kubeflow on GCP, does that mean I can access other Kubeflow services, like Katib?
Elnaz