1 vote

I'm working on a digital asset management deployment, and it's likely we'll need more than the 10TB maximum size for a persistent disk. The DAM software does not support multiple storage points and likely won't in the future.

How are people coping with this limitation? Is the 10TB maximum likely to increase as Google Compute Engine matures?

There is a quota increase request form. Is 10TB the absolute maximum? Or is it possible to request a limit beyond that? – Corey Riggle
Note that you can now create persistent disks of up to 64TB (I deleted my earlier comment as it's no longer applicable). See my answer below for a blog post link with more details. – Misha Brukman

2 Answers

1 vote

As of 1 Feb 2016, you can create a persistent disk of up to 64 TB; see the blog post for more details.
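For what it's worth, here is a minimal sketch of creating one large disk instead of managing an array of smaller ones, using the google-cloud-compute Python client library. The project, zone, and disk names are placeholders, and 65536 GB corresponds to the 64 TB per-disk cap mentioned above.

    # Sketch only: create a single large persistent disk.
    # Project, zone, and disk name are placeholders.
    from google.cloud import compute_v1

    def create_large_disk(project, zone, name, size_gb=65536):
        """Create a standard persistent disk; 65536 GB = 64 TB (the per-disk cap)."""
        client = compute_v1.DisksClient()
        disk = compute_v1.Disk()
        disk.name = name
        disk.size_gb = size_gb
        disk.type_ = f"projects/{project}/zones/{zone}/diskTypes/pd-standard"
        operation = client.insert(project=project, zone=zone, disk_resource=disk)
        operation.result()  # wait for the disk to be provisioned
        return disk

    create_large_disk("my-project", "us-central1-a", "dam-storage-disk")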

0 votes

According to the official doc:

Instances with shared-core machine types are limited to a maximum of 16 persistent disks.

For custom machine types or predefined machine types that have a minimum of 1 vCPU, you can attach up to 128 persistent disks.

Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.

Most instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the boot persistent disk.
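To illustrate the attachment side of those limits, here is a hedged sketch (again using the google-cloud-compute client library, with placeholder project, zone, instance, and disk names) of attaching an existing persistent disk to a running instance. The per-instance caps quoted above (up to 128 disks and 64 TB of total attached space on non-shared-core machine types) still apply.

    # Sketch only: attach an already-created persistent disk to an instance.
    # Instance and disk names are placeholders.
    from google.cloud import compute_v1

    def attach_existing_disk(project, zone, instance, disk_name):
        client = compute_v1.InstancesClient()
        attached = compute_v1.AttachedDisk()
        attached.source = f"projects/{project}/zones/{zone}/disks/{disk_name}"
        attached.auto_delete = False  # keep the disk if the instance is deleted
        operation = client.attach_disk(
            project=project, zone=zone, instance=instance,
            attached_disk_resource=attached,
        )
        operation.result()  # block until the attach completes

    attach_existing_disk("my-project", "us-central1-a", "dam-server", "dam-storage-disk")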