I have an HBase table hosting around 80 GB of data (around 110 million rows), where each row has a variable number of columns. I want to use this table primarily for key lookups (around 10 million of them) while minimizing the total time. What is the best way to do this? Can the Stargate REST interface handle a large number of connections?

1 Answer

Since 10 million lookups is a high volume, Memcached or Redis may be a better choice: 80 GB is small enough to keep entirely in memory.

If you want to stay with HBase, bloom filters will help (they let a region server skip HFiles that cannot contain the requested row). Using a connection pool and parallel threads will also improve performance, since a single synchronous client spends most of its time waiting on RPC round trips.
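The thread-pool idea can be sketched like this (Python; a plain in-memory dict and the `lookup` function stand in for a real HBase client Get, which in practice would come from a per-thread connection out of a pool, e.g. via happybase or the Java client):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the HBase table; with a real client each worker would
# issue table.get(row_key) over its own pooled connection instead.
TABLE = {f"row-{i}": f"value-{i}" for i in range(1000)}

def lookup(row_key):
    # Hypothetical single-key Get against the stand-in table.
    return TABLE.get(row_key)

def parallel_lookup(row_keys, workers=16):
    # Fan the lookups out across a pool of worker threads. With a real
    # client, grouping keys into multi-gets per worker cuts RPC count
    # further than one Get per key.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lookup, row_keys))

results = parallel_lookup([f"row-{i}" for i in range(100)])
```

The worker count is something to tune against your region-server capacity; too many concurrent clients will just queue on the server side.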