6
votes

We are running a Titan Graph DB server backed by Cassandra as a persistent store, and we are hitting Cassandra's tombstone thresholds, which causes our queries to periodically fail or time out as data accumulates. It seems that compaction is unable to keep up with the number of tombstones being added.

Our use case involves:

  1. High read/write throughput.
  2. High sensitivity to read latency.
  3. Frequent updates to node values in Titan, which causes rows to be updated in Cassandra.

Given the above use case, we are already tuning Cassandra aggressively, as follows (a sketch of how these settings are applied is shown after the list):

  1. Aggressive compaction, by using the leveled compaction strategy (LeveledCompactionStrategy).
  2. Setting tombstone_compaction_interval to 60 seconds.
  3. Setting tombstone_threshold to 0.01.
  4. Setting gc_grace_seconds to 1800.
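
For reference, here is a minimal sketch (Python, using the DataStax cassandra-driver) of how the table-level settings above can be applied; the contact point and the keyspace/table name titan.graphindex are assumptions based on the warning quoted below, so substitute your own schema:

    # Sketch only: applies the compaction/tombstone settings listed above.
    # "titan.graphindex" and the contact point are assumptions; adjust them.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        ALTER TABLE titan.graphindex
        WITH compaction = {
            'class': 'LeveledCompactionStrategy',
            'tombstone_compaction_interval': 60,
            'tombstone_threshold': 0.01
        }
        AND gc_grace_seconds = 1800
    """)

    cluster.shutdown()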

Despite these optimizations, we are still seeing warnings in the Cassandra logs similar to:

    [WARN] (ReadStage:7510) org.apache.cassandra.db.filter.SliceQueryFilter: Read 0 live and 10350 tombstoned cells in .graphindex (see tombstone_warn_threshold). 8001 columns was requested, slices=[00-ff], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}

Occasionally, as time progresses, we also see the failure threshold breached, which causes errors.

Our cassandra.yaml has tombstone_warn_threshold set to 10000 and tombstone_failure_threshold set to 250000 (much higher than recommended), with no real noticeable benefit.

If there is room for further optimization, any help pointing us to the correct configuration would be greatly appreciated. Thanks in advance for your time and help.

4
Are you frequently deleting data? It is my understanding that tombstones are not created unless data is explicitly deleted or expires. - Andy Tolbert
Our belief is that Titan GraphDB, which handles all our interactions with Cassandra internally, might be issuing a delete and a new insert for every update, which is adding to the number of deletes. - Rohit
It would be good to confirm whether that is the case. Could you enable probabilistic tracing (datastax.com/documentation/cassandra/2.0/cassandra/tools/…) on one of your Cassandra nodes to see what the deletes are? Another possibility is that columns are being expired (set with a TTL); do you think that could be happening here as well? - Andy Tolbert
I will try this today. Thanks again for the pointers. - Rohit
@Rohit came across this post today. Should help you understand when tombstones are created. groups.google.com/forum/#!msg/aureliusgraphs/XMG7DKkAll0/… - Curtis Allen

4 Answers

7
votes

It sounds like the root of your problem is your data model. You've done everything you can to mitigate TombstoneOverwhelmingException. Since your data model requires such frequent updates, causing tombstone creation, an eventually consistent store like Cassandra may not be a good fit for your use case. When we've experienced these types of issues, we had to change our data model to fit better with Cassandra's strengths.

For more about deletes, see http://www.slideshare.net/planetcassandra/8-axel-liljencrantz-23204252 (slides 34-39).

6
votes

Tombstones are not compacted away until gc_grace_seconds, configured on the table, has elapsed for a given tombstone. So even if you increase your compaction frequency, tombstones will not be removed until gc_grace_seconds has elapsed (the default is 10 days). You could try tuning gc_grace_seconds down to a lower value and running repairs more frequently (usually you want to schedule repairs every gc_grace_seconds_in_days - 1 days), along the lines of the sketch below.
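
A minimal sketch (Python, using the DataStax cassandra-driver) of that suggestion; the keyspace/table name titan.graphindex, the contact point, and the one-day value are assumptions for illustration only:

    # Sketch only: lower gc_grace_seconds so tombstones become purgeable sooner.
    # Every node must then be repaired (e.g. via `nodetool repair`) at least
    # once within each gc_grace_seconds window, or deleted data can reappear.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])          # assumed contact point
    session = cluster.connect()

    session.execute(
        "ALTER TABLE titan.graphindex WITH gc_grace_seconds = 86400"  # 1 day
    )

    cluster.shutdown()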

2
votes

So everyone here is right: if you repair and compact frequently, you can reduce your gc_grace_seconds value.

However, it may also be worth considering that inserting nulls is equivalent to a delete, which will increase your number of tombstones. Instead, you'll want to bind UNSET_VALUE if you're using prepared statements (see the sketch below). Probably too late for you, but it may help anyone else who comes across this.
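
A minimal sketch (Python, using the DataStax cassandra-driver) of the UNSET_VALUE approach; it needs native protocol v4 (Cassandra 2.2+ and a recent driver), and the keyspace, table, and columns here are hypothetical:

    # Sketch only: avoid writing tombstones for columns you don't want to touch.
    from cassandra.cluster import Cluster
    from cassandra.query import UNSET_VALUE

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("my_keyspace")   # hypothetical keyspace

    prepared = session.prepare(
        "INSERT INTO users (id, name, email) VALUES (?, ?, ?)"
    )

    # Binding None would write a null (a tombstone) for email;
    # binding UNSET_VALUE leaves the column untouched instead.
    session.execute(prepared, (42, "alice", UNSET_VALUE))

    cluster.shutdown()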

1
votes

The variables you've tuned help tombstones expire, but it's worth noting that while tombstones cannot be purged before gc_grace_seconds has elapsed, Cassandra makes no guarantee that they WILL be purged once it has. A tombstone is not dropped until the SSTable containing it is compacted, and even then it will not be eliminated if another SSTable contains a cell that it shadows.

As a result, tombstones can persist for a very long time, especially in SSTables that are infrequently compacted (say, very large STCS SSTables). To address this, there are tools such as the forceUserDefinedCompaction JMX endpoint; if you're not adept at using JMX endpoints, there are tools that do this for you automatically, such as http://www.encql.com/purge-cassandra-tombstones/