Usually, a Cloud SQL database runs full time to serve requests at any time. It's not a serverless product that starts when a request comes in. You can have a look at the pricing page to estimate the cost (mainly CPU, memory, and storage; size the database according to your usage and expected performance).
About the process, here is what we did at my previous company:
- Use Cloud Scheduler to trigger a Cloud Function
- Create temporary tables in BigQuery
- Export the BigQuery temporary tables to CSV files in Cloud Storage
- Run a Cloud SQL import of the files from Cloud Storage into temporary tables
- Run a query in the database to merge the imported data into the existing data, then drop the temporary tables
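A minimal sketch of the export/import/merge steps, assuming hypothetical names (`my-project`, `my-instance`, a `my-exports` bucket, a `users` table) and a single export file per table (BigQuery can export to one file when the table is small enough). The import body follows the Cloud SQL Admin API's `importContext` shape, and the merge is a standard Postgres-style upsert:

```python
MERGE_SQL = """
INSERT INTO users
SELECT * FROM tmp_users
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;
DROP TABLE tmp_users;
"""

def import_body(uri: str, database: str, table: str) -> dict:
    # Request body for the Cloud SQL Admin API import call (CSV from GCS).
    return {
        "importContext": {
            "fileType": "CSV",
            "uri": uri,
            "database": database,
            "csvImportOptions": {"table": table},
        }
    }

def export_and_import(table: str):
    # Export a BigQuery temporary table to CSV in GCS, then import it into
    # Cloud SQL. Needs google-cloud-bigquery and google-api-python-client;
    # imports are inside the function so the module loads without them.
    from google.cloud import bigquery
    from googleapiclient.discovery import build

    uri = f"gs://my-exports/{table}.csv"  # hypothetical bucket
    bigquery.Client().extract_table(f"my-project.staging.tmp_{table}", uri).result()

    sqladmin = build("sqladmin", "v1beta4")
    sqladmin.instances().import_(
        project="my-project",
        instance="my-instance",
        body=import_body(uri, "app", f"tmp_{table}"),
    ).execute()
    # Then run MERGE_SQL against the database to fold the imported rows
    # into the live table and drop the staging table.
```

The merge statement is a sketch for one table; in practice you list the columns to update explicitly for each table you sync.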
If that takes too long to run in a single function, you can use Cloud Run (60-minute timeout) or a dispatch function. That function is called by Cloud Scheduler and publishes a message to Pub/Sub for each table to process. On the Pub/Sub side, you can plug in a Cloud Function (or a Cloud Run service) that performs the previous process only on the table named in the message. That way, you process all the tables concurrently instead of sequentially.
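The fan-out pattern above can be sketched as two functions, assuming hypothetical names (`my-project`, a `table-to-process` topic, the `TABLES` list). Background Cloud Functions triggered by Pub/Sub receive the payload base64-encoded in `event["data"]`:

```python
import base64
import json

TABLES = ["users", "orders", "invoices"]  # hypothetical list of tables to sync

def message_for(table: str) -> bytes:
    # Pub/Sub message payloads are bytes; encode the table name as JSON.
    return json.dumps({"table": table}).encode("utf-8")

def table_from_event(event: dict) -> str:
    # Pub/Sub-triggered functions get the payload base64-encoded in event["data"].
    return json.loads(base64.b64decode(event["data"]))["table"]

def dispatch(event=None, context=None):
    # Dispatcher, triggered by Cloud Scheduler: fan out one message per table.
    # Needs google-cloud-pubsub (lazy import so the module loads without it).
    from google.cloud import pubsub_v1
    publisher = pubsub_v1.PublisherClient()
    topic = publisher.topic_path("my-project", "table-to-process")
    for table in TABLES:
        publisher.publish(topic, data=message_for(table))

def worker(event, context):
    # Worker, triggered by the Pub/Sub subscription: process only the table
    # named in the message.
    table = table_from_event(event)
    # ... run the export/import/merge steps for `table` here ...
```

Each message triggers an independent worker invocation, so the tables are processed in parallel and each invocation stays well under the timeout.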
About the costs, you will pay for:
- BigQuery queries (the volume of data you process to create the temporary tables)
- BigQuery storage (very low; you can create temporary tables that expire, i.e. are deleted automatically, after 1 hour)
- Cloud Storage (very low; you can set a lifecycle rule on the files to delete them after a few days)
- File transfer: free if you stay in the same region
- Export and import: free
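Both cleanup mechanisms can be set up programmatically. A sketch, assuming hypothetical names (`my-project.staging.tmp_users`, a `my-exports` bucket) and the `google-cloud-bigquery` / `google-cloud-storage` clients:

```python
from datetime import datetime, timedelta, timezone

def expiry(hours: int = 1) -> datetime:
    # Expiration timestamp for a BigQuery temporary table: BigQuery deletes
    # the table automatically once this time passes.
    return datetime.now(timezone.utc) + timedelta(hours=hours)

def create_expiring_table(table_id: str):
    # Create a table that BigQuery deletes after 1 hour. Needs
    # google-cloud-bigquery (lazy import so the module loads without it).
    from google.cloud import bigquery
    client = bigquery.Client()
    table = bigquery.Table(table_id)  # e.g. "my-project.staging.tmp_users"
    table.expires = expiry(1)
    return client.create_table(table)

def add_delete_lifecycle(bucket_name: str, days: int = 7):
    # Lifecycle rule that deletes exported CSV files a few days after
    # creation. Needs google-cloud-storage.
    from google.cloud import storage
    bucket = storage.Client().get_bucket(bucket_name)
    bucket.add_lifecycle_delete_rule(age=days)
    bucket.patch()
```

With these in place, the temporary BigQuery tables and the intermediate CSV files clean themselves up, so their storage cost stays negligible.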
In summary, only the BigQuery queries and the Cloud SQL instance are significant costs.