2
votes

I am using [cqlsh 5.0.1 | Cassandra 2.2.1 | CQL spec 3.3.0 | Native protocol v4]. I have a 2-node Cassandra cluster with a replication factor of 2.

$ nodetool status test_keyspace
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns (effective)  Host ID                         Rack
UN  10.xxx.4.xxx  85.32 GB   256          100.0%            xxxx-xx-xx-xx-xx                rack1
UN  10.xxx.4.xxx  80.99 GB   256          100.0%            x-xx-xx-xx-xx                   rack1

[I have replaced numbers with x]

This is the keyspace definition.

cqlsh> describe test_keyspace;

CREATE KEYSPACE test_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'}  AND durable_writes = true;

CREATE TABLE test_keyspace.test_table (
    id text PRIMARY KEY,
    listids map<int, timestamp>
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX list_index ON test_keyspace.test_table (keys(listids));

The id values are unique, and the keys of listids have a cardinality of close to 1000. I have millions of records in this keyspace.

I want to get the count of records with a specific key, and also the list of those records. I tried this query from cqlsh:

select count(1) from test_table where listids contains key 12;

I got this error after a few seconds:

ReadTimeout: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

I have already modified timeout parameters in cqlshrc and cassandra.yaml.

cat /etc/cassandra/conf/cassandra.yaml | grep read_request_timeout_in_ms
#read_request_timeout_in_ms: 5000
read_request_timeout_in_ms: 300000

cat ~/.cassandra/cqlshrc
[connection]
timeout = 36000
request_timeout = 36000
client_timeout = 36000

When I checked /var/log/cassandra/system.log, I found only this:

WARN  [SharedPool-Worker-157] 2016-07-25 11:56:22,010 SelectStatement.java:253 - Aggregation query used without partition key

I am using the Java client from my code, and it is also getting a lot of read timeouts. One solution might be to remodel my data, but that will take more time (although I am not sure about that). Can someone suggest a quick solution to this problem?

Adding stats:

$ nodetool cfstats test_keyspace
Keyspace: test_keyspace
    Read Count: 5928987886
    Read Latency: 3.468279416568199 ms.
    Write Count: 1590771056
    Write Latency: 0.02020026287239664 ms.
    Pending Flushes: 0
        Table (index): test_table.list_index
        SSTable count: 9
        Space used (live): 9664953448
        Space used (total): 9664953448
        Space used by snapshots (total): 4749
        Off heap memory used (total): 1417400
        SSTable Compression Ratio: 0.822577888909709
        Number of keys (estimate): 108
        Memtable cell count: 672265
        Memtable data size: 30854168
        Memtable off heap memory used: 0
        Memtable switch count: 0
        Local read count: 1718274
        Local read latency: 63.356 ms
        Local write count: 1031719451
        Local write latency: 0.015 ms
        Pending flushes: 0
        Bloom filter false positives: 369
        Bloom filter false ratio: 0.00060
        Bloom filter space used: 592
        Bloom filter off heap memory used: 520
        Index summary off heap memory used: 144
        Compression metadata off heap memory used: 1416736
        Compacted partition minimum bytes: 73
        Compacted partition maximum bytes: 2874382626
        Compacted partition mean bytes: 36905317
        Average live cells per slice (last five minutes): 5389.0
        Maximum live cells per slice (last five minutes): 51012
        Average tombstones per slice (last five minutes): 2.0
        Maximum tombstones per slice (last five minutes): 2759

        Table: test_table
        SSTable count: 559
        Space used (live): 62368820540
        Space used (total): 62368820540
        Space used by snapshots (total): 4794
        Off heap memory used (total): 817427277
        SSTable Compression Ratio: 0.4856571513639344
        Number of keys (estimate): 96692796
        Memtable cell count: 2587248
        Memtable data size: 27398085
        Memtable off heap memory used: 0
        Memtable switch count: 558
        Local read count: 5927272991
        Local read latency: 3.788 ms
        Local write count: 559051606
        Local write latency: 0.037 ms
        Pending flushes: 0
        Bloom filter false positives: 4905594
        Bloom filter false ratio: 0.00023
        Bloom filter space used: 612245816
        Bloom filter off heap memory used: 612241344
        Index summary off heap memory used: 196239565
        Compression metadata off heap memory used: 8946368
        Compacted partition minimum bytes: 43
        Compacted partition maximum bytes: 1916
        Compacted partition mean bytes: 173
        Average live cells per slice (last five minutes): 1.0
        Maximum live cells per slice (last five minutes): 1
        Average tombstones per slice (last five minutes): 1.0
        Maximum tombstones per slice (last five minutes): 1

2 Answers

0
votes

You can either redesign your tables or split your query into multiple smaller queries.

You are selecting via a secondary index without using the partition key (that's what the warning tells you). Doing that, you essentially perform a full table scan: your nodes have to look into every partition in order to fulfill your request.

A solution without changing the data model would be to iterate over all partitions and run your query once per partition:

select count(*) from test_table where id = 'somePartitionId' and listids contains key 12;

This way, your nodes know in which partition to look for this information. You would then have to aggregate the results of these queries on the client side.
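A minimal sketch of that client-side aggregation, assuming the DataStax Java driver (as used in the other answer) and a hypothetical list of known partition ids (`knownIds`, the contact point, and the id values are placeholders, not from the question):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import java.util.Arrays;
import java.util.List;

public class PerPartitionCount {

    // Pure aggregation step: sum the per-partition counts on the client side.
    static long total(List<Long> perPartitionCounts) {
        long sum = 0;
        for (long c : perPartitionCounts) {
            sum += c;
        }
        return sum;
    }

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1") // placeholder contact point
                .build();
        try (Session session = cluster.connect("test_keyspace")) {
            // One count query per partition; each one hits a single partition,
            // so the coordinator no longer has to scan the whole table.
            PreparedStatement ps = session.prepare(
                    "SELECT count(*) FROM test_table "
                    + "WHERE id = ? AND listids CONTAINS KEY 12");

            List<String> knownIds = Arrays.asList("id1", "id2"); // hypothetical ids
            long total = 0;
            for (String id : knownIds) {
                Row row = session.execute(ps.bind(id)).one();
                total += row.getLong(0);
            }
            System.out.println("total matching records: " + total);
        } finally {
            cluster.close();
        }
    }
}
```

This only works if you can enumerate the partition keys (e.g. from another table or an external source); otherwise you are back to a full scan.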

0
votes

I faced the same issue. I tried: 1) Setting `client_timeout = None` in cqlshrc in ~/.cassandra (the file notes it "Can also be set to None to disable"). That did not help.

2) Increasing the *_timeout_in_ms settings in my cassandra.yaml.

That did not help either. Finally I settled on running a loop over a SELECT in my Java code and receiving the count that way. 12 million rows gave me the count in 7 seconds. It is fast.

Cluster cluster = Cluster.builder()
        .addContactPoints(serverIp)
        .build();
Session session = cluster.connect(keyspace);

String cqlStatement = "SELECT count(*) FROM imadmin.device_appclass_attributes";
//String cqlStatement = "SELECT * FROM system_schema.keyspaces";
for (Row row : session.execute(cqlStatement)) {
    System.out.println(row.toString());
}