2 votes

I have a six-node cluster running Cassandra 2.1.6. Yesterday I tried to drop a column family and received the message "Column family ID mismatch".

I tried running nodetool repair, but after the repair completed I got the same message. I then tried selecting from the column family, but got the message "Column family not found".

I ran the following query to get a list of all column families in my schema:
select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'xxx';
At this point I received the message "Keyspace 'system' not found."

I tried the command describe keyspaces and, sure enough, system was not in the list of keyspaces.

I then tried nodetool resetlocalschema on one of the nodes missing the system keyspace, and when that failed to resolve the problem I tried nodetool rebuild, but got the same messages after the rebuild completed.

I tried stopping the nodes that were missing the system keyspace and restarting them. Once the restart completed, the system keyspace was back and I was able to execute the above query successfully. However, the table I had tried to drop previously was not listed, so I tried to recreate it and once again received the message "Column family ID mismatch".

Finally, I shut down the cluster and restarted it... and everything works as expected.

My questions are:

1. How/why did the system keyspace disappear?
2. What happened to the data being inserted into my column families while the system keyspace was missing from two of the six nodes? (My application didn't seem to have any problems.)
3. Is there a way I can detect problems like this automatically, or do I have to check on my keyspaces manually each day?
4. Is there a way to fix the missing system keyspace and/or the column family ID mismatch without restarting the entire cluster?

EDIT
As per Jim Meyers's suggestion, I queried the cf_id on each node of the cluster and confirmed that all nodes return the same value.

select cf_id from system.schema_columnfamilies where columnfamily_name = 'customer' allow filtering;

cf_id
--------------------------------------
cbb51b40-2b75-11e5-a578-798867d9971f

I then ran ls on my data directory and can see that there are multiple entries for a few of my tables:
customer-72bc62d0ff7611e4a5b53386c3f1c9f9
customer-cbb51b402b7511e5a578798867d9971f

My application dynamically creates tables at run time (always using IF NOT EXISTS). It seems likely that the application issued the same CREATE TABLE statement on separate nodes at the same time, resulting in the schema mismatch. Since I restarted the cluster, everything seems to be working fine.

Is it safe to delete the extra file?
i.e. customer-72bc62d0ff7611e4a5b53386c3f1c9f9
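For anyone hitting the same situation: the directory suffix is just the cf_id with the dashes stripped, so stale directories can be matched against the current cf_id mechanically. A minimal sketch (the function name is my own; the inputs are taken from the listing above):

```python
# A table's data directory is named <table>-<cf_id with dashes removed>,
# so any other <table>-* directory is a leftover from an old incarnation.
def stale_table_dirs(table, cf_id, dir_names):
    live = table + "-" + cf_id.replace("-", "")
    return [d for d in dir_names if d.startswith(table + "-") and d != live]

dirs = [
    "customer-72bc62d0ff7611e4a5b53386c3f1c9f9",
    "customer-cbb51b402b7511e5a578798867d9971f",
]
print(stale_table_dirs("customer", "cbb51b40-2b75-11e5-a578-798867d9971f", dirs))
# -> ['customer-72bc62d0ff7611e4a5b53386c3f1c9f9']
```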

Comments:

please post ls in your data directory on this node. – phact
I thought it might. Will post the action you need to take to fix. – phact

2 Answers

3 votes

1. The cause of this problem is a CREATE TABLE statement collision. Do not generate tables dynamically from multiple clients, even with IF NOT EXISTS. The first thing you need to do is fix your code so that this does not happen. Create your tables manually from cqlsh, allowing time for the schema to settle, and always wait for schema agreement when modifying the schema.

2. Here's the fix:

1) Change your code to not automatically re-create tables (even with IF NOT EXISTS).

2) Do a rolling restart to ensure the schema matches across nodes, then run nodetool describecluster on each node and check that only one schema version is reported.

ON EACH NODE:

3) Check your filesystem and see if you have two directories for the table in question in the data directory.

IF THERE ARE TWO OR MORE DIRECTORIES:

4) Identify from system.schema_columnfamilies which cf_id is the "new" one (currently in use):

cqlsh -e "select * from system.schema_columnfamilies" | grep <your_table_name>

5) Move the data from the "old" directory into the "new" one and remove the old directory.

6) If there are multiple "old" directories, repeat step 5 for each of them.

7) Run nodetool refresh.

IF THERE IS ONLY ONE DIRECTORY:

No further action is needed.
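The move in steps 5 through 7 can be scripted. A hedged sketch in Python (merge_old_table_dir is my own helper, not a Cassandra tool; it refuses to overwrite existing files, since colliding SSTable generation numbers would need renaming first):

```python
import shutil
from pathlib import Path

def merge_old_table_dir(old_dir, new_dir):
    """Move every file out of a stale table directory into the live one,
    then delete the stale directory. Refuses to overwrite: if SSTable
    generation numbers collide, rename the old files before retrying."""
    old, new = Path(old_dir), Path(new_dir)
    for f in sorted(old.iterdir()):
        target = new / f.name
        if target.exists():
            raise RuntimeError("refusing to overwrite " + str(target))
        shutil.move(str(f), str(target))
    old.rmdir()
```

After the move, run nodetool refresh <keyspace> <table> on that node so Cassandra loads the relocated SSTables.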

Futures

Schema collisions will continue to be an issue until CASSANDRA-9424 is resolved.

Here's an example of it occurring on Jira, closed as "Not a Problem": CASSANDRA-8387
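On automatic detection (question 3 in the original post): schema disagreement shows up in nodetool describecluster as more than one entry under "Schema versions", so a monitoring script only needs to count those entries. A small sketch; the sample output format is an assumption based on 2.1-era nodetool:

```python
def schema_versions(describecluster_output):
    """Return the schema version IDs listed by `nodetool describecluster`.
    More than one ID means the cluster is in schema disagreement."""
    versions = []
    in_section = False
    for line in describecluster_output.splitlines():
        if "Schema versions:" in line:
            in_section = True
            continue
        # Version lines look like "<uuid>: [ip1, ip2, ...]".
        if in_section and ": [" in line:
            versions.append(line.split(":")[0].strip())
    return versions

sample = (
    "Cluster Information:\n"
    "\tName: Test Cluster\n"
    "\tSchema versions:\n"
    "\t\taaaa-1111: [10.0.0.1, 10.0.0.2]\n"
    "\t\tbbbb-2222: [10.0.0.3]\n"
)
print(schema_versions(sample))  # two versions -> schema disagreement
```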

2 votes

When you create a table in Cassandra it is assigned a unique ID that should be the same on all nodes. It sounds like your table did not have the same ID on all nodes. I'm not sure how that might happen, but perhaps there was a glitch when the table was created and it was created multiple times.

You should always use the IF NOT EXISTS clause when creating tables.

To check if your id's are consistent, try this on each node:

  1. In cqlsh, run "SELECT cf_id FROM system.schema_columnfamilies WHERE columnfamily_name = 'yourtablename' ALLOW FILTERING;"

  2. Look in the data directory under the keyspace the table was created in. You should see a single directory for the table, named table_name-cf_id (with the dashes removed from the cf_id).

If things are correct you should see the same cf_id in all these places. If you see different ones, then somehow things got out of sync.
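Once the cf_id has been collected from each node (step 1 run everywhere), the consistency check reduces to comparing a handful of strings. A trivial sketch with made-up node names:

```python
def cf_ids_consistent(cf_ids_by_node):
    # The table is healthy only if every node reports the same cf_id.
    return len(set(cf_ids_by_node.values())) == 1

ids = {
    "node1": "cbb51b40-2b75-11e5-a578-798867d9971f",
    "node2": "cbb51b40-2b75-11e5-a578-798867d9971f",
    "node3": "72bc62d0-ff76-11e4-a5b5-3386c3f1c9f9",
}
print(cf_ids_consistent(ids))  # -> False: node3 disagrees
```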

As for the other symptoms, like the system keyspace disappearing, I don't have a suggestion other than that you hit some kind of bug in the software. If you get a lot of strange symptoms like this, then perhaps you have some kind of data corruption. You might want to think about backing up your data in case things go south and you need to rebuild the cluster.