0 votes

I'm looking for a way to provide temporary access to a "folder" inside Google Cloud Storage.

My problem: I have buckets with a lot of "folders" (I know that folders don't really exist on GCS).

Bucket
|- Folder1
|  |- File2.csv
|  |- File3.csv
|  \- File4.csv
\- Folder2
   |- File5.csv
   \- File6.csv

And I want to give users temporary access (something with automatic expiration) to read Bucket/Folder1/*, so a user can access the whole "directory" Folder1 and download all the files inside it (File2.csv, File3.csv, File4.csv).

I found signed URLs (https://cloud.google.com/storage/docs/gsutil/commands/signurl), but I did not find a way to grant access to the whole Folder1 directory.

I was able to implement the same scenario on AWS S3 with the following policy:

{
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": [
    "arn:aws:s3:::name-of-a-bucket"
  ],
  "Condition": {
    "StringLike": {
      "s3:prefix": "Folder1/*"
    }
  }
}

Is there any way to provide temporary access to multiple files?

What about using the GCS APIs? I believe that we can change ACLs on a folder using the API. We could then write a job which grants access and schedule a second job to execute after a period of time which removes access? – Kolban
@Kolban I have to provide hundreds of these credentials per second, and I'm not sure ACLs are designed for a scenario like this. In cloud.google.com/storage/quotas I found the sentence "There is a limit of 100 access control list entries (ACLs) per object." I would also have to create some kind of caching mechanism, because one file can be accessed more than 100 times per expiration period (12 hours in my case). In the worst-case scenario I will have to do that, but I hope there's something inside Google Cloud which already solves this problem :-) – yustme
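
A minimal sketch of the caching idea mentioned in the comment above, assuming the @google-cloud/storage Node.js client, default credentials, a placeholder bucket name, and 12-hour V4 signed URLs:

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('your-bucket-name'); // placeholder

const EXPIRATION_MS = 12 * 60 * 60 * 1000; // 12-hour signed URLs
const urlCache = new Map(); // object name -> { url, signedAt }

// Return a recently issued signed URL for the object, or sign a new one.
async function getCachedSignedUrl(objectName) {
    const cached = urlCache.get(objectName);
    if (cached && Date.now() - cached.signedAt < EXPIRATION_MS / 2) {
        return cached.url;
    }

    const [url] = await bucket.file(objectName).getSignedUrl({
        version: 'v4',
        action: 'read',
        expires: Date.now() + EXPIRATION_MS,
    });

    urlCache.set(objectName, { url, signedAt: Date.now() });
    return url;
}

Reusing a URL only for the first half of its lifetime keeps anything handed out valid for at least another six hours.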

2 Answers

1 vote

You can refer to many objects in your bucket by using the * wildcard. Applying it to the subdirectory you are targeting matches all the objects under it. For example, to access the objects in Folder1 in your Bucket, you would use:

gs://Bucket/Folder1/*

You will then access File2.csv, File3.csv and File4.csv.

To grant temporary access to your bucket objects with signed URLs, use the -d flag to specify how long your objects stay available, with one of the following suffixes:

- s for seconds
- m for minutes
- h for hours
- d for days

Remember these details:

- Not specifying the -d flag sets the duration to 1 hour.
- Specifying no suffix sets the duration in hours.
- 7 days is the maximum duration.

The following would create a signed URL for 2 hours:

gsutil signurl -d 2 <private-key-file> gs://some-bucket/some-object/

Where using -d 2h would have done the same thing.

The following would create a signed URL for 15 minutes:

gsutil signurl -d 15m <private-key-file> gs://some-bucket/some-object/

Finally, to grant read access for downloading all the objects in Folder1, the following makes the objects available for 1 hour:

gsutil signurl <private-key-file> gs://Bucket/Folder1/*

Where adding -d 1 or -d 1h would have done the same thing.

The private key file is a JSON- or P12-format file that is created when you create your service account key. Store it wherever you like, then replace <private-key-file> with its path when running the command to create the signed URLs.
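
If you would rather sign URLs from code than with gsutil, the Node.js client library can do the equivalent with the same service account key. Below is a minimal sketch, assuming a JSON key file at ./key.json and placeholder bucket and object names:

// npm install @google-cloud/storage
const { Storage } = require('@google-cloud/storage');

// Point the client at the service account key used for signing (placeholder path)
const storage = new Storage({ keyFilename: './key.json' });

async function signOneObject() {
    // Roughly equivalent to: gsutil signurl -d 2h ./key.json gs://some-bucket/some-object
    const [url] = await storage
        .bucket('some-bucket')
        .file('some-object')
        .getSignedUrl({
            version: 'v4',
            action: 'read',
            expires: Date.now() + 2 * 60 * 60 * 1000, // 2 hours (V4 URLs max out at 7 days)
        });

    return url;
}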

0 votes

Using signed URLs, the way I got it to work was listing all the files under the prefix and then generating a URL for each one.

For example:

// npm install @google-cloud/storage
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucketName = 'your-bucket-name';

// List only the objects whose names start with this prefix (the "folder")
const options = {
    prefix: 'folderName/',
};

// Signed URL settings: read-only access that expires in 9 minutes
const optionsBucket = {
    version: 'v2',
    action: 'read',
    expires: Date.now() + 1000 * 60 * 9, // 9 minutes
};

async function signFolder() {
    const [files] = await storage.bucket(bucketName).getFiles(options);

    const urls = [];
    for (const file of files) {
        const [url] = await storage
            .bucket(bucketName)
            .file(file.name)
            .getSignedUrl(optionsBucket);

        urls.push(url);
        // ... continue your code
    }

    return urls;
}
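
You can then collect and use the URLs, for example:

signFolder().then((urls) => urls.forEach((url) => console.log(url)));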