16
votes

I'm having trouble getting an existing Cassandra node to rejoin the cluster after a reboot (on a new virtual machine instance).

I had a running Cassandra cluster with 4 nodes, all in state "up and normal" according to nodetool status. The nodes run on virtual machines in Azure. I changed the instance type of the virtual machine for 10.0.0.6, which resulted in a reboot of that machine. The machine kept the address 10.0.0.6. After the reboot I am unable to start Cassandra again; I am getting this exception:

INFO  22:39:07 Handshaking version with /10.0.0.4
INFO  22:39:07 Node /10.0.0.6 is now part of the cluster
INFO  22:39:07 Node /10.0.0.5 is now part of the cluster
INFO  22:39:07 Handshaking version with cassandraprd001/10.0.0.6
INFO  22:39:07 Node /10.0.0.9 is now part of the cluster
INFO  22:39:07 Handshaking version with /10.0.0.5
INFO  22:39:07 Node /10.0.0.4 is now part of the cluster
INFO  22:39:07 InetAddress /10.0.0.6 is now UP
INFO  22:39:07 Handshaking version with /10.0.0.9
INFO  22:39:07 InetAddress /10.0.0.4 is now UP
INFO  22:39:07 InetAddress /10.0.0.9 is now UP
INFO  22:39:07 InetAddress /10.0.0.5 is now UP
ERROR 22:39:08 Exception encountered during startup
java.lang.RuntimeException: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
    at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:455) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:667) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.StorageService.initServer(StorageService.java:615) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.StorageService.initServer(StorageService.java:509) ~[apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) [apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [apache-cassandra-2.1.0.jar:2.1.0]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [apache-cassandra-2.1.0.jar:2.1.0]
java.lang.RuntimeException: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
    at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:455)
    at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:667)
    at org.apache.cassandra.service.StorageService.initServer(StorageService.java:615)
    at org.apache.cassandra.service.StorageService.initServer(StorageService.java:509)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546)
Exception encountered during startup: A node with address cassandraprd001/10.0.0.6 already exists, cancelling join. Use cassandra.replace_address if you want to replace this node.
INFO  22:39:08 Announcing shutdown

I am using Cassandra 2.1.0. I am not replacing a dead node - I am just trying to get the old node up and running again. According to nodetool status (run on the other nodes), all nodes are "up and normal" except 10.0.0.6, which is "down and normal".

How do I get this node up and running again?

3
When you modified and rebooted the system, did it remove the existing data? Notably the system keyspace data? If so, you'll have to use -Dreplace_address, even though it's the same node. - Jeff Jirsa

3 Answers

20
votes

First, on another node, use

nodetool status

The output shows the list of nodes in the cluster. Find the node with the IP that fails to start, get its Host ID, and pass it to:

nodetool removenode <node_id>

Then start Cassandra on the failed node again.
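For illustration only (the Host ID below is made up), the line for the failed node in the nodetool status output might look like:

DN  10.0.0.6  1.1 GB  256  25.0%  66a2318a-8bb5-4f35-b3e6-c3a0f3d58bc4  rack1

and the corresponding command would be:

nodetool removenode 66a2318a-8bb5-4f35-b3e6-c3a0f3d58bc4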

Best,

6
votes

Quick answer: if the node's IP is 10.200.10.200,

add this

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.200.10.200"

to the end of your

cassandra-env.sh

Don't forget to remove it once you're done.
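If you launch Cassandra from the command line instead of via the service scripts (e.g. a tarball install), the same property should also be accepted directly as a startup flag:

cassandra -Dcassandra.replace_address=10.200.10.200

Either way, the option only needs to be in effect for the one start-up that replaces the node.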

3
votes

Have a look at this blog post: http://blog.alteroot.org/articles/2014-03-12/replace-a-dead-node-in-cassandra.html.

It worked for me. This is a bug in Cassandra: if your node's host_id has changed but it keeps its old IP, Cassandra will throw this exception.

If you use Cassandra 2.x.x, you should modify cassandra/conf/cassandra-env.sh.

Finally, don't forget to REMOVE the modification to cassandra-env.sh after the bootstrap completes!
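As a rough sketch of what "after the bootstrap completes" means in practice (the log path below is the usual default for package installs and may differ on your system): wait until the node shows up as UN again, then revert the change.

nodetool status                           # wait until the replaced node is listed as UN (Up/Normal)
tail -f /var/log/cassandra/system.log     # or follow the log until the node has finished joining

Once it is UN, remove the replace_address line from cassandra-env.sh so it is not applied on the next restart.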