7
votes

I'm learning about Firebase auth and storage in a web app. My idea asks users to login via Firebase and then upload an image.

I can see that this is possible from Firebase auth and storage. However, I would like to put limits on the file count and file-size they can upload.

Is it possible to control uploads within the Firebase console (or somewhere else)? After reviewing the JavaScript examples, I see how I can put files in, and I can imagine writing code which would query Firebase for a user's upload count, and then limit on the client side, but of course, this is a completely insecure method.

If I hosted this as a single page app on, say, GitHub pages, I am wondering if I could set these limits without involving a server. Or, do I need to proxy my uploads through a server to make sure I never allow users to upload more than I intend them to?

There are lots of tutorials and answers I've seen describing totally insecure methods, and the authors even assert that it's secure :(

3 Answers

8
votes

You can limit what a user can upload through Firebase Storage's security rules.

For example this (from the linked docs) is a way to limit the size of uploaded files:

service firebase.storage {
  match /b/<your-firebase-storage-bucket>/o {
    match /images/{imageId} {
      // Only allow uploads of any image file that's less than 5MB
      allow write: if request.resource.size < 5 * 1024 * 1024
                   && request.resource.contentType.matches('image/.*');
    }
  }
}

But there is currently no way in these rules to limit the number of files a user can upload.

One approach that comes to mind is to use fixed file names. For example, if you restrict the allowed file names to the numbers 1..5, the user can only ever have five files in storage:

match /public/{userId}/{imageId} {
  // Only the file names 1.txt through 5.txt are allowed,
  // so each user can store at most five files
  allow write: if imageId.matches("[1-5]\\.txt");
}

1
vote

If you need per-user storage validation, the solution is a bit trickier, but it can be done.

P.S.: You will need to generate a Firebase token with Cloud Functions, but the server won't be in the middle of the upload itself...

https://medium.com/@felipepastoree/per-user-storage-limit-validation-with-firebase-19ab3341492d

0
votes

One solution may be to use the Admin SDK to change the Storage rules based on a Firestore document holding the upload count per day.

Say you have a Firestore document userUploads/uid with the fields uploadedFiles: 0 and lastUploadedOn.

Now, once the user uploads a file to Firebase Storage (assuming it is within limits and there are no errors), you can trigger a Cloud Function that reads the userUploads/uid document and checks whether lastUploadedOn is of an earlier date than the uploaded file's date. If so, it sets uploadedFiles to 1 and lastUploadedOn to the upload datetime; otherwise, it increments uploadedFiles and sets lastUploadedOn to the current datetime. Once uploadedFiles reaches 10 (your limit), you can change the Storage rules using the Admin SDK (see example here) and then reset the count to 0 in the userUploads/uid document.
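The per-day counter logic described above can be sketched as a pure function (illustrative only; the document shape and the limit of 10 are as assumed in this answer, and the real Cloud Function would read and write the Firestore document around it):

```javascript
// Sketch of the per-day counter update for the userUploads/{uid} document.
// `doc` mirrors the Firestore document: { uploadedFiles, lastUploadedOn }.
// Returns the updated fields plus whether the daily limit was just hit.
const DAILY_LIMIT = 10;

function sameDay(a, b) {
  return a.toDateString() === b.toDateString();
}

function recordUpload(doc, uploadedAt) {
  const isNewDay =
    !doc.lastUploadedOn || !sameDay(new Date(doc.lastUploadedOn), uploadedAt);
  const uploadedFiles = isNewDay ? 1 : doc.uploadedFiles + 1;
  return {
    uploadedFiles,
    lastUploadedOn: uploadedAt.toISOString(),
    limitReached: uploadedFiles >= DAILY_LIMIT, // trigger the rule change here
  };
}
```

Note that sameDay compares local calendar days, which is exactly where the timezone complications mentioned below come in.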

However, there is a little caveat: the rule change takes time to deploy, and no legitimate upload should be in flight while it is pending. From the Admin SDK docs:

Firebase security rules take a period of several minutes to fully deploy. When using the Admin SDK to deploy rules, make sure to avoid race conditions in which your app immediately relies on rules whose deployment is not yet complete

I haven't tried this myself, but it looks like it would work. On second thought, changing the rules back to allow writes could be complicated. If the user uploads on the next day (after the rules have been changed), the upload error handler could trigger another Cloud Function to check whether it is a legitimate request, change the rules back to normal, and retry the upload after some time, but that would be a very bad user experience. On the other hand, if you use a scheduled Cloud Function to check every userUploads/uid document daily and reset the values, it could be costly (~$18 per million users per month at $0.06/100K reads), it gets complicated when users are in different timezones, and it would be irrelevant for most users unless they upload that frequently. Furthermore, rules have limits:

  • Rules must be smaller than 64 KiB of UTF-8 encoded text when serialized
  • A project can have at most 2500 total deployed rulesets. Once this limit is reached, you must delete some old rulesets before creating new ones.

So per-user rules for a large user base could easily reach this limit (on top of your other rules).

Perhaps the optimal solution is to use auth claims. Start with a rule that denies writes if the user has a particular auth claim (say canUpload: false). Then, in a Cloud Function triggered on upload, attach this claim when the user reaches the limit. This takes effect in real time: it immediately blocks the user, as opposed to the Admin SDK's rules deployment delay.
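A sketch of such a deny rule (assuming the claim is named canUpload, as above, and that uploads go under public/{userId}/ as in the first answer):

```
service firebase.storage {
  match /b/<your-firebase-storage-bucket>/o {
    match /public/{userId}/{imageId} {
      // Deny writes once the Cloud Function has set canUpload: false
      // as a custom claim on the user's token
      allow write: if request.auth != null
                   && request.auth.uid == userId
                   && request.auth.token.canUpload != false;
    }
  }
}
```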

To remove the auth claim:

  1. In the upload error handler, call another Cloud Function that checks whether lastUploadedOn is from an earlier date and, if so, removes the claim
  2. Call a separate Cloud Function before each upload that checks whether the user has the auth claim and lastUploadedOn is from an earlier date; if so, it removes the claim
  3. Additionally, the claim can be checked and removed during login if lastUploadedOn is earlier than today, but this is less efficient than 2, since it incurs unnecessary and needless Firestore reads while the user is not even uploading anything
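The staleness check behind options 1 and 2 can be sketched as follows (a hypothetical helper; the claim name canUpload and the field lastUploadedOn are as assumed above, and the real function would call the Admin SDK to actually clear the claim):

```javascript
// Sketch of the "is the block stale?" check: the canUpload: false claim
// should be removed when the user's last upload happened on an earlier
// (local) calendar day than the current request.
function shouldRemoveClaim(claims, lastUploadedOn, now) {
  if (claims.canUpload !== false) return false; // user is not blocked
  const last = new Date(lastUploadedOn);
  const startOfToday = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  return last < startOfToday; // last upload was before today
}
```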

With option 2, if the client tries to skip the call and still has the auth claim, they can never upload, since they are blocked by the security rule. Otherwise, if there is no auth claim, they go through the normal process.

Note: Changing auth claims needs to be pushed to the client. See this doc.