
I found this GitHub reference for measuring the write performance of Bigtable: https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/java/simple-performance-test

As per the official documentation, write performance should reach around 10,000 rows per second for a Bigtable instance with a single node and SSD storage. However, on average I'm getting about 35 QPS of write throughput with that configuration. Is this unusual?

I'm running my benchmark on 1 million rows (1 KB per row). I also modified the source code to generate 1 million different values, since the original code generates a single value and writes that same value to Bigtable for every row. Note that the monitoring console never shows anything above 15 QPS. Is there a specific reason for this variance between what I see on the console and what I see while running the performance testing utility? A sketch of the write loop I'm using is shown below.
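For reference, here is a minimal sketch of the kind of batched write loop I mean, using the Cloud Bigtable HBase client that the simple-performance-test example is built on. The project, instance, table, and column names are placeholders, and the batching and row-key format are my own assumptions rather than the exact code from that repository.

```java
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SimpleWriteBenchmark {

    public static void main(String[] args) throws Exception {
        // Placeholder identifiers; replace with your own project, instance, and table.
        String projectId = "my-project";
        String instanceId = "my-instance";
        TableName tableName = TableName.valueOf("perf-test");
        byte[] family = "cf".getBytes(StandardCharsets.UTF_8);
        byte[] qualifier = "col".getBytes(StandardCharsets.UTF_8);

        int totalRows = 1_000_000;
        int batchSize = 1_000;   // send puts in batches rather than one RPC per row
        Random random = new Random();

        try (Connection connection = BigtableConfiguration.connect(projectId, instanceId);
             Table table = connection.getTable(tableName)) {

            long start = System.currentTimeMillis();
            List<Put> batch = new ArrayList<>(batchSize);

            for (int i = 0; i < totalRows; i++) {
                // A distinct ~1 KB value per row, so every write is unique.
                byte[] value = new byte[1024];
                random.nextBytes(value);

                // Prefix the key so writes are not purely sequential, which can
                // concentrate load on a single part of the key range.
                String rowKey = String.format("%02d-row-%09d", i % 100, i);
                Put put = new Put(rowKey.getBytes(StandardCharsets.UTF_8));
                put.addColumn(family, qualifier, value);
                batch.add(put);

                if (batch.size() == batchSize) {
                    table.put(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                table.put(batch);
            }

            double seconds = (System.currentTimeMillis() - start) / 1000.0;
            System.out.printf("Wrote %d rows in %.1f s (%.0f writes/sec)%n",
                    totalRows, seconds, totalRows / seconds);
        }
    }
}
```

A single-threaded, unbatched loop like the original example can itself be the bottleneck, which could explain throughput far below the published estimates.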

This Stack Overflow reference suggests that the performance I'm seeing might not be unusual: Google Bigtable performance: QPS vs CPU utilization

Is there any other way or utility that can help me benchmark Bigtable write, read, and scan performance?


1 Answer


Cloud Bigtable performance is highly dependent on workload, schema design, and dataset characteristics. The performance numbers shown on that documentation page are estimates only.

I recommend reading the full documentation, which covers the causes of slower performance, testing recommendations, and a troubleshooting section for performance issues.

In addition, you can use the Cloud Bigtable loadtest tool, written in Go, as a starting point for developing your own performance tests.