I've been running a database application that writes data synchronously to disk, so I'm looking for the best possible disk throughput. GCP's local SSDs are supposed to provide the best performance (both IOPS and MB/s). However, when I benchmark synchronous database writes, the throughput achieved by a zonal persistent SSD is significantly better than that of the local SSD. Stranger still, a single local SSD performs better than a RAID 0 configuration of four local SSDs.
To test the performance I ran a benchmark consisting of a single thread that creates transactions in a loop, each performing a random 4 KB write.
The zonal persistent SSD was 128 GB, while the local SSD setup consists of 4 SSDs in RAID 0. An N2D machine with 32 vCPUs was used to rule out a CPU bottleneck. To make sure the problem wasn't the OS or filesystem, I tried several different versions, including the ones recommended by Google, but the result is always the same.
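For reference, the write pattern is roughly what the following fio invocation produces (this isn't my actual benchmark, just an approximation of a single thread doing fsync'd random 4 KB writes; point --directory at whichever disk is under test):

fio --name=syncwrite --directory=/mnt/disks/stable-store --ioengine=sync --rw=randwrite --bs=4k --fsync=1 --size=1G --numjobs=1 --iodepth=1 --runtime=60 --time_based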
On average, the results of my experiments are:
| SSD | Latency | Throughput |
|---|---|---|
| Zonal persistent SSD (128 GB) | ~1.5 ms | ~700 writes/second |
| Local SSD (4 NVMe SSDs, RAID 0) | ~14 ms | ~71 writes/second |
| Local SSD (1 SSD) | ~13 ms | ~75 writes/second |
I'm at a bit of a loss as to how to proceed, since I'm not sure whether this result is to be expected. If it is, it seems like my best option is to use zonal persistent disks. Do these results look correct to you, or could there be some problem with my setup?
Suggestions along the lines of enabling write caching (i.e., not actually syncing every write) would improve performance; however, the goal here is fast performance for genuinely synchronous disk writes. Otherwise, my best option would be zonal persistent SSDs (they offer replicated storage) or just using RAM, which will always be faster than any SSD.
As AdolfoOG suggested, there might be an issue with my RAID configuration, so to shed some light on this, here are the commands I used to create my RAID 0 setup with four devices. Note that /dev/nvme0nX refers to each of the NVMe devices I'm using.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4
sudo mkfs.ext4 -F /dev/md0
sudo mkdir /mnt/disks/
sudo mkdir /mnt/disks/stable-store
sudo mount /dev/md0 /mnt/disks/stable-store
sudo chmod a+w /mnt/disks/stable-store
This should be the same process as what Google advises, unless I messed something up, of course!
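To rule that out, the array state can be double-checked after creation with something like:

sudo mdadm --detail /dev/md0
cat /proc/mdstat

These should report an active raid0 array spanning the four NVMe devices if the array was built correctly.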