1 vote

I have a cluster across two zones in AWS. This is functional and I have no issues with it.

It is the second DC that seems to have issues: it comes up, but it does not own any data and nothing is replicated to it. This DC will be used for reporting, and no client doing inserts will run against it.

The following is the schema_keyspaces config:

====
keyspace_name | durable_writes | strategy_class                                        | strategy_options
app           | True           | org.apache.cassandra.locator.NetworkTopologyStrategy | {"1":"3"}
====
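
For reference, the row above comes from querying the system keyspace in cqlsh (the columns match system.schema_keyspaces on Cassandra 1.2), e.g.:

SELECT keyspace_name, durable_writes, strategy_class, strategy_options
FROM system.schema_keyspaces;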

Cassandra Version 1.2.14

initial_token:

num_tokens: 256
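
The ring status below is the output of:

nodetool status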

Datacenter: 1
==============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.1.1.22  122.7 MB   256     37.5%             12ae63ef-3757-4b42-872e-bdd26d0dea50  1
UN  10.1.1.23  133.07 MB  256     37.7%             05771659-b117-4c7a-9a71-c5fd0dc976c1  1
UN  10.1.1.20  131.4 MB   256     40.3%             5310a111-0954-4d2b-aeff-eb2d3150faff  1
UN  10.1.1.21  129.81 MB  256     36.7%             3e94252d-19cd-4334-918e-f4df980a452a  1
UN  10.1.2.20  110.57 MB  256     33.8%             9bf87f06-0617-4ace-abf8-fa418c05a0eb  2
UN  10.1.2.21  132.05 MB  256     37.4%             6b89460e-74f3-4b96-8363-d3fe5c413f48  2
UN  10.1.2.22  125.58 MB  256     38.4%             ae11a4f2-9956-4b26-8e3d-16425c76f916  2
UN  10.1.2.23  124.12 MB  256     38.1%             53c93e96-c490-4d98-bb89-cbdc71e0346d  2

Datacenter: 2
==============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.2.2.22  187.4 KB   256     0.0%              95caff71-4dca-4e57-97e3-5985e79cd746  2
UN  10.2.2.23  187.33 KB  256     0.0%              91b44d9e-5aab-44b2-8135-670addc2a339  2
UN  10.2.2.20  187.33 KB  256     0.0%              0e5bd1d1-69f8-4851-bfad-e1068835ddd8  2
UN  10.2.2.21  137.93 KB  256     0.0%              29e6aa79-1145-4308-8248-44280ff9f4ad  2
UN  10.2.2.27  163.65 KB  256     0.0%              f471466b-4f6c-4450-8b7e-1225056265f2  2
UN  10.2.2.24  138.03 KB  256     0.0%              07ba8dd1-9bc3-4d51-a903-2853ed09e008  2
UN  10.2.2.25  187.79 KB  256     0.0%              7fa542c5-f85c-4a4a-b641-d7a310710701  2

As you can see, the second DC does not seem to receive any data, yet I can query the keyspace defined on the first DC from the second DC. Why is the data not replicating?

Note that the config at the top of this post was used to add nodes to the first DC; those nodes join the cluster and the data balances across them without my doing anything else.


2 Answers

1 vote

It appears you have defined datacenter 1 to get 3 replicas, but nothing is defined for datacenter 2:

NetworkTopologyStrategy | {"1":"3"}

You'll need something similar to:

NetworkTopologyStrategy | {"1":"3", "2":"3"}
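
In CQL, that change would look roughly like the statement below (a sketch assuming the keyspace is the app keyspace from your schema_keyspaces output, the datacenters are named 1 and 2 as in nodetool status, and you want 3 replicas in each DC):

ALTER KEYSPACE app
WITH replication = {'class': 'NetworkTopologyStrategy', '1': 3, '2': 3};

Note that existing data does not move on its own after the ALTER; you still need to repair (or rebuild) the nodes in the new datacenter so the replicas get streamed across.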
0 votes

So after a bit of fiddling about I eventually used the following:

ALTER KEYSPACE myapp WITH replication = {'class': 'NetworkTopologyStrategy', '1': 3, '2': 3} AND durable_writes = true;

Then, using cluster SSH, I connected to all 16 Cassandra nodes and ran a repair on the keyspace. This forced the Merkle tree comparison and the rebalancing of the data.
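
The per-node command was just the standard repair against that keyspace, roughly (assuming the myapp keyspace name from the ALTER above):

nodetool repair myapp

Once the repairs finish, the Load column for the 10.2.2.x nodes in nodetool status should show real data instead of the ~140-190 KB in the question.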

So happy days.