
I've added one node to a five-node Cassandra cluster, and the new node joined without any errors. However, when I run nodetool status on the new node, two nodes of the cluster are missing:

cassandra@cassandra-n8:~$ nodetool status
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load       Owns (effective)  Host ID                               Token                                    Rack
UN  10.140.118.4    13.22 GB   47.3%             aed1d856-316a-4ec9-8858-27ccf87e42da  -9153208255983624823                     rack1
UN  10.53.186.53    30.48 GB   50.9%             80cb0036-33b9-4c37-b789-7dac340034ee  -9137279293977023905                     rack1
UN  10.53.170.3     26.93 GB   51.5%             737f49e5-684f-46ef-bf8b-c82326128835  -9106630210265624873                     rack1
UN  10.140.104.105  30 GB      50.3%             18c74472-235d-4284-9906-0ab8cc40011d  -9213643688261125087                     rack1
cassandra@cassandra-n8:~$

nodetool gossipinfo, on the other hand, correctly shows all six nodes:

cassandra@cassandra-n8:~$ nodetool gossipinfo
/10.140.120.27
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.140.120.27
  LOAD:2.69146405E10
  SEVERITY:2.0100502967834473
  HOST_ID:2564094b-08ea-42c4-82b0-a8246bd3ebcf
  RELEASE_VERSION:2.0.7.31
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra
/10.53.186.53
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.53.186.53
  LOAD:3.2724062932E10
  SEVERITY:0.0
  HOST_ID:80cb0036-33b9-4c37-b789-7dac340034ee
  RELEASE_VERSION:2.0.7.31
  STATUS:NORMAL,-1090066755942681373
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra
/10.53.170.41
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.53.170.41
  LOAD:2.8562198657E10
  SEVERITY:2.0100502967834473
  HOST_ID:866d2276-0dac-41b3-aece-6a2711ef0234
  RELEASE_VERSION:2.0.7.31
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra
cassandra-n8/10.140.118.4
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.140.118.4
  LOAD:1.4191857026E10
  SEVERITY:1.0362694263458252
  HOST_ID:aed1d856-316a-4ec9-8858-27ccf87e42da
  RELEASE_VERSION:2.0.7.31
  STATUS:NORMAL,-1073073255063738723
  NET_VERSION:7
  DC:Cassandra
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
/10.140.104.105
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.140.104.105
  LOAD:3.2209705168E10
  SEVERITY:0.0
  HOST_ID:18c74472-235d-4284-9906-0ab8cc40011d
  RELEASE_VERSION:2.0.7.31
  STATUS:NORMAL,-1088894055925784152
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra
/10.53.170.3
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.53.170.3
  LOAD:2.8916136103E10
  SEVERITY:0.0
  HOST_ID:737f49e5-684f-46ef-bf8b-c82326128835
  RELEASE_VERSION:2.0.7.31
  STATUS:NORMAL,-1099238535843317980
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra

So far I have restarted the node and could see that all handshakes completed successfully, but nodetool status still shows the incomplete list. Am I missing something?
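For reference, the handshake messages can be checked in the system log like this (assuming the default log location for a package install; adjust the path if yours differs):

grep -i 'handshaking' /var/log/cassandra/system.log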


1 Answer


From your post, two of the nodes (10.140.120.27 and 10.53.170.41) do not have a STATUS in their gossip state. For example:

cassandra@cassandra-n8:~$ nodetool gossipinfo
/10.140.120.27
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.140.120.27
  LOAD:2.69146405E10
  SEVERITY:2.0100502967834473
  HOST_ID:2564094b-08ea-42c4-82b0-a8246bd3ebcf
  RELEASE_VERSION:2.0.7.31
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra

This node, by contrast, has a STATUS of NORMAL:

/10.53.186.53
  X_11_PADDING:{"workload":"Cassandra","active":"true"}
  RACK:rack1
  RPC_ADDRESS:10.53.186.53
  LOAD:3.2724062932E10
  SEVERITY:0.0
  HOST_ID:80cb0036-33b9-4c37-b789-7dac340034ee
  RELEASE_VERSION:2.0.7.31
  STATUS:NORMAL,-1090066755942681373
  NET_VERSION:7
  SCHEMA:17b20010-00c2-3035-94d2-ed9448b4190a
  DC:Cassandra
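A quick way to spot which nodes are missing a STATUS is to filter the gossip output for the node headers and the STATUS lines (a minimal shell sketch; node header lines start in the first column, attributes are indented):

nodetool gossipinfo | grep -E '^[^ ]|STATUS'

Any node whose header prints with no STATUS line underneath is in the same state as 10.140.120.27 above.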

I would have a look at the startup logs on those two nodes to see what is happening, and check whether you can reach them independently.
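For example (the log path assumes a default install, and the two addresses are the nodes missing a STATUS above):

# look for gossip or bootstrap problems in the startup log of each affected node
grep -iE 'error|exception|gossip' /var/log/cassandra/system.log | tail -50

# query the cluster from the affected nodes' point of view
nodetool -h 10.140.120.27 status
nodetool -h 10.53.170.41 status

If those nodes report a different view of the ring, their gossip state has diverged from the rest of the cluster.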