I am using the Spark BigQuery connector to query tables and views from a Dataproc cluster. What I observed is that when reading a view, the cache is not used: the connector creates a new temporary table for every view read:
df = spark.read.format('bigquery').option('table', view_name).option('viewsEnabled', 'true').load()
This is not the case when I read from a table; there, the cache is used:
df = spark.read.format('bigquery').option('table', table).load()
Thank you