0
votes

New to operating Cassandra clusters. We have a single-DC, 14-node production cluster running DCE v2.1.15.

The system log shows many tombstone (TS) warnings like the ones below. We are wondering whether this is okay and simply due to the application's nature, whether the default warn level (tombstone_warn_threshold=5000) is too low and should be raised, or whether we ought to run manual compactions between our nightly repairs (every node gets repaired once a week).

Hints appreciated!

WARN [SharedPool-Worker-1] 2016-08-18 11:45:02,536 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_RecentIndex for key: 3230303230305febd8fc98e0bf11e5b870502699f4d249 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
WARN [SharedPool-Worker-3] 2016-08-18 11:45:02,548 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_MessageFlagsIndex for key: 3230303230305febd8fc98e0bf11e5b870502699f4d249 (see tombstone_warn_threshold). 1 columns were requested, slices=[1-1:!]
WARN [SharedPool-Worker-2] 2016-08-18 11:45:04,566 SliceQueryFilter.java:320 - Read 1 live and 1123 tombstone cells in KeyspaceMetadata.CF_UIDIndex for key: 3230303230315f299f8d3ae0c011e5933d7775140b84c3 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
WARN [SharedPool-Worker-2] 2016-08-18 11:45:11,853 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_RecentIndex for key: 3230303230305febd8fc98e0bf11e5b870502699f4d249 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
WARN [SharedPool-Worker-2] 2016-08-18 11:45:11,864 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_MessageFlagsIndex for key: 3230303230305febd8fc98e0bf11e5b870502699f4d249 (see tombstone_warn_threshold). 1 columns were requested, slices=[1-1:!]
WARN [SharedPool-Worker-1] 2016-08-18 11:46:09,624 SliceQueryFilter.java:320 - Read 2 live and 2537 tombstone cells in KeyspaceMetadata.CF_TimeIndex for key: 3230303030385ffebcbd200d9411e6b9750c94c36d1038 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
WARN [SharedPool-Worker-3] 2016-08-18 11:47:31,434 SliceQueryFilter.java:320 - Read 2 live and 2544 tombstone cells in KeyspaceMetadata.CF_TimeIndex for key: 3230303030345f6b87b24afbe111e5bf7f828e02f15dd6 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
WARN [SharedPool-Worker-1] 2016-08-18 11:49:13,870 SliceQueryFilter.java:320 - Read 3 live and 2540 tombstone cells in KeyspaceMetadata.CF_TimeIndex for key: 3230303030355f533d997cfbdf11e5985390948f56b8a7 (see tombstone_warn_threshold). 5000 columns were requested, slices=[-]
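Before deciding how to react, it can help to see which tables trigger the warnings most often. A minimal sketch of summarizing the log (the real file would typically be something like the node's system.log; here a few sample lines from the excerpt above stand in for it, and the message format is assumed from that excerpt):

```shell
# Summarize which tables trigger tombstone warnings most often.
# "sample" stands in for the real system.log contents.
sample='WARN [SharedPool-Worker-1] 2016-08-18 11:45:02,536 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_RecentIndex for key: x (see tombstone_warn_threshold).
WARN [SharedPool-Worker-2] 2016-08-18 11:45:11,853 SliceQueryFilter.java:320 - Read 0 live and 6251 tombstone cells in KeyspaceMetadata.CF_RecentIndex for key: x (see tombstone_warn_threshold).
WARN [SharedPool-Worker-1] 2016-08-18 11:46:09,624 SliceQueryFilter.java:320 - Read 2 live and 2537 tombstone cells in KeyspaceMetadata.CF_TimeIndex for key: y (see tombstone_warn_threshold).'

# Pull out the table name after "tombstone cells in" and count occurrences.
summary=$(printf '%s\n' "$sample" \
  | grep -o 'tombstone cells in [^ ]*' \
  | awk '{print $4}' | sort | uniq -c | sort -rn)
echo "$summary"
```

On a real node you would feed the actual log file into the same pipeline instead of the sample variable.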


1 Answer

0
votes

Hi Steffen, please go through OpsCenter and see how many tables have a TS count above 5000. If the number of tables is small, running manual compactions on those particular tables is a good solution. Since you are already repairing once a week, I would also suggest reviewing the data model of the keyspace to understand why it is causing such a high number of TS. If you are not using OpsCenter, you can check the number of tombstones with:
sstable2json full_path | grep '"t"' | wc -l
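For reference, that pipeline just counts occurrences of the "t" marker in the sstable2json output. A self-contained sketch of the idea, using a hand-made JSON fragment in place of a real SSTable dump (the exact cell layout varies by version, so treat the sample as illustrative only):

```shell
# Count tombstone markers ("t") in sstable2json-style output.
# The JSON below is a hand-made stand-in for a real dump.
json='[
{"key": "0001","cells": [["a","",1471500000000,"t",1471500000],["b","val",1471500001000]]},
{"key": "0002","cells": [["c","",1471500002000,"t",1471500002]]}
]'
count=$(printf '%s\n' "$json" | grep -o '"t"' | wc -l)
echo "$count tombstone markers in the sample"
```

With a real SSTable you would replace the sample variable with `sstable2json full_path`.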