We are writing a Dataflow job that loads JSON files from a Cloud Storage bucket into a BigQuery dataset. Both the storage bucket and the BigQuery dataset are in region X. However, the Dataflow regional endpoint is not available in region X; the nearest available region is Y. So I have set the Dataflow job region to Y but the worker zone to a zone in region X, and all the compute instances are indeed being spun up in region X. Still, the Dataflow job fails with the error:
Cannot read and write in different locations: source: Y, destination: X
Both the temp location and staging location buckets are also in region X.
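For reference, this is roughly how the pipeline options are configured; the project id, bucket names, and the concrete region/zone identifiers below are placeholders rather than our real values:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Sketch of our option setup; project, buckets, and region/zone names are placeholders.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project',
    region='region-Y',                                    # Dataflow regional endpoint (Y)
    zone='region-X-a',                                    # worker zone inside region X
    temp_location='gs://my-bucket-region-x/temp',         # bucket in region X
    staging_location='gs://my-bucket-region-x/staging',   # bucket in region X
)
```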
The Beam version is 2.17, using the Python SDK.
We are creating a Dataflow template and running it with the DataflowRunner. The template file is also stored in region X.
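The pipeline itself is essentially a read-parse-write chain, and the template is produced by running the same code with template_location pointing at a path in region X. A minimal sketch, where the paths, table name, and region/zone identifiers are placeholders, the input is assumed to be newline-delimited JSON, and the destination table is assumed to already exist:

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Same options as above, plus template_location so the runner stages a
# template file instead of launching the job directly (path is a placeholder).
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project',
    region='region-Y',
    zone='region-X-a',
    temp_location='gs://my-bucket-region-x/temp',
    staging_location='gs://my-bucket-region-x/staging',
    template_location='gs://my-bucket-region-x/templates/json_to_bq',
)

with beam.Pipeline(options=options) as p:
    (p
     | 'ReadJson' >> beam.io.ReadFromText('gs://my-bucket-region-x/input/*.json')
     | 'Parse' >> beam.Map(json.loads)   # one JSON object per line
     | 'WriteToBQ' >> beam.io.WriteToBigQuery(
         'my-project:my_dataset.my_table',  # dataset in region X; table assumed to exist
         create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```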