
We have about 10 million small files (blobs), most of them around 1-2 kB, some around 100 kB, and a very few above 1 MB; total size is about 50 GB. Each has a unique integer ID, the key from a MySQL database. They will always be accessed by that key, never searched. Access time for one blob should be under 50 ms. We also want to archive them occasionally, or do bulk deletes or updates. I need to choose the proper storage option on Google Cloud:

  • Storage - the natural choice for blobs, but it does not handle archiving and restoring large numbers of objects well: there are no bulk download/upload operations.
  • Datastore - is it suitable as a blob store? It has a 1 MB limit per entity.
  • CloudSQL - is it suitable as a blob store?
  • BigTable - too expensive :-)
  • Custom file server - NFS, Gluster, ...
  • Custom NoSQL? Also an option, but we would prefer a hosted Google solution.
  • Other?

1 Answer


You are going to be somewhat limited in your Google Cloud Platform options given these requirements.

The ideal approach would be Datastore, but as you note it has a size limit per entity. CloudSQL is an option, but it is really designed for transactional workloads and therefore comes with a higher running cost.
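One common workaround for Datastore's 1 MB entity limit is to split each oversized blob into fixed-size chunks and store one entity per chunk, keyed by `(blob_id, chunk_index)`. A minimal sketch of the chunking logic, with the Datastore calls themselves omitted (the names and chunk size here are illustrative assumptions, not part of any Google API):

```python
# Sketch: split a blob into pieces that each fit inside one Datastore
# entity, and reassemble them on read. Storage calls are omitted.

CHUNK_SIZE = 1000 * 1000  # stay safely under the 1 MB entity limit


def split_blob(blob: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Split a blob into chunks small enough for one entity each."""
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]


def join_chunks(chunks: list) -> bytes:
    """Reassemble the original blob from its ordered chunks."""
    return b"".join(chunks)
```

With a key scheme like `(blob_id, chunk_index)`, all chunks of one blob can be fetched together with a single key-range or ancestor query.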

I would therefore say you would need to either use CloudSQL or run your own storage setup on a Compute Engine instance until Google increases those limits.
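If Cloud Storage stays on the table despite the lack of bulk operations, one practical detail: purely sequential object names (such as the raw MySQL id) can concentrate writes on a narrow index range. A commonly recommended mitigation is to prefix the object name with a short hash of the key so ids spread across the keyspace. A minimal sketch (the bucket layout and prefix length are assumptions for illustration):

```python
import hashlib


def object_name(blob_id: int) -> str:
    """Map an integer key to a Cloud Storage object name.

    A short hash prefix spreads otherwise-sequential ids across the
    keyspace; the trailing id keeps the mapping reversible by eye.
    """
    prefix = hashlib.md5(str(blob_id).encode()).hexdigest()[:4]
    return f"blobs/{prefix}/{blob_id}"
```

Lookup by key stays O(1): given the id, you recompute the same name deterministically, so no extra index is needed.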