I've been working on an application which requires regular writes and massive reads at once.
The application stores a few small text columns plus a map column, which is by far the largest column in the table.
I'm using Phantom-DSL in Scala (with the Datastax Java driver underneath), and the application crashes once the data size grows.
Here is a log from my application.
[error] - com.websudos.phantom - All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.OperationTimedOutException: [/127.0.0.1:9042] Operation timed out))
And here are the cassandra logs.
I have posted the Cassandra logs in a pastebin because they were too large to embed in the question.
I can't work out the reason for this crash. I have already tried increasing the timeout and turning off the row cache.
From what I understand, this is a fairly common problem and should be resolvable by tuning Cassandra for this workload.
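For context, these are the server-side read timeouts in cassandra.yaml that I tried raising (the values below are illustrative, not my exact settings; the defaults noted in the comments are what ships with Cassandra):

```yaml
# cassandra.yaml -- server-side read timeouts (illustrative values)
read_request_timeout_in_ms: 20000    # single-partition reads (default 5000)
range_request_timeout_in_ms: 30000   # range scans, which large multi-row reads hit (default 10000)
```

Raising these alone didn't resolve the issue for me.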
Data flows into Cassandra from several different sources, so writes are not very frequent. Reads, however, are large: over 300K rows may be requested at once, and those rows then need to be transferred over HTTP.
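To frame the question, here is a sketch of how I understand the read path could be handled with the underlying Java driver. Note that the `OperationTimedOutException` in the log above is raised client-side by the driver, so the driver's socket read timeout and fetch size seem like the relevant knobs in addition to the server-side ones. Class names are from driver 3.x; the contact point, keyspace, and table names are placeholders:

```scala
import com.datastax.driver.core.{Cluster, SimpleStatement, SocketOptions}
import scala.collection.JavaConverters._

val cluster = Cluster.builder()
  .addContactPoint("127.0.0.1")
  // Raise the client-side read timeout (driver 3.x defaults to 12 seconds).
  .withSocketOptions(new SocketOptions().setReadTimeoutMillis(30000))
  .build()
val session = cluster.connect("my_keyspace") // placeholder keyspace

val stmt = new SimpleStatement("SELECT * FROM my_table") // placeholder table
// Fetch 1000 rows per page instead of pulling the whole result in one go.
stmt.setFetchSize(1000)

// The driver pages transparently: iterating the ResultSet triggers further
// fetches, so the full 300K rows are never materialized at once and each
// individual request stays well under the timeout.
for (row <- session.execute(stmt).asScala) {
  // stream each row out over the HTTP response here
}
```

My understanding is that streaming pages this way, rather than one huge read, is the usual way to keep large result sets under the timeout, but I'd like confirmation that this is the right approach here.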