I propose using Cloud Build. It's not the most obvious solution, but it's serverless and cheap, which is perfect for your one-time use case. Here is the configuration I propose:
steps:
  - name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: "bash"
    args:
      - -c
      - |
        # Copy all your files locally
        gsutil -m cp gs://311_nyc/311* .
        # Uncompress your files here
        # (I don't know your compression method; gunzip?)
        # Append each file to a merged file, then delete it after the merge
        for file in 311*; do cat "$file" >> merged; rm "$file"; done
        # Copy the merged file to the destination bucket
        gsutil cp merged gs://myDestinationBucket/myName.csv
options:
  # Use 1 TB of disk so all the files fit on the same worker at the same time.
  # I didn't understand whether the 10 GB is per uncompressed file or the total size;
  # if it's the total size, this option is probably useless.
  diskSizeGb: 1000
# Optionally extend the default 10-minute timeout if the build takes longer.
timeout: 660s
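To run it, save the config under a name of your choice (I assume cloudbuild.yaml here) and submit it with the gcloud CLI; --no-source is used because this build doesn't need any local files, and the bucket and object names above are placeholders to adapt to your case:

# One-off build submission; nothing is uploaded, the workers pull everything from GCS
gcloud builds submit --no-source --config=cloudbuild.yaml

Everything runs on a Cloud Build worker, so none of the data has to transit through your own machine.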