
We are using JanusGraph 0.2.0 with Cassandra 3.11.1 and testing its support for geo diversity. Currently we have 2 datacenters with 1 node in each, and the replication factor is 1 for both datacenters.

janusgraph-cassandra.properties

storage.backend=cql
storage.cql.read-consistency-level=LOCAL_QUORUM
storage.cql.write-consistency-level=LOCAL_QUORUM
storage.cql.local-datacenter=dc2
storage.cql.only-use-local-consistency-for-system-operations=true
storage.cql.replication-strategy-options=dc1,1,dc2,1
storage.cql.replication-strategy-class=NetworkTopologyStrategy
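
As a cross-check, the same replication topology can be set directly on the keyspace from cqlsh (a sketch; the keyspace name janusgraph is the JanusGraph default, so adjust it if you changed graphname or the keyspace option):

ALTER KEYSPACE janusgraph
WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 1, 'dc2': 1};

If you alter replication on an existing keyspace, run a full repair afterwards so the data is actually streamed to the new replicas.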

When Cassandra is running in the nodes of both datacenters, we are able to connect and create the JanusGraph keyspace. However, when one datacenter goes down and we try to open a connection, we observe the following exception:

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.249.55.111:9042 (com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency QUORUM (2 required but only 1 alive)))
        at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)

We have configured LOCAL_QUORUM, so why is it still using QUORUM when establishing the connection?

Update: nodetool status output

Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns (effective)  Host ID                               Rack
DN  10.249.55.108  283.54 KiB  256          100.0%            619242db-f0bd-4492-aeb6-2bb0ebfe4733  rack1
Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.249.55.111  294.75 KiB  256          100.0%            6ebe897e-94e3-44e5-99dc-055beb633e74  rack1
Can you paste the output of "nodetool status"? – dilsingi
Thanks for looking into it, I have added the nodetool status output. – satlearner

1 Answer


This appears to be a bug in the JanusGraph code, and I've opened an issue to track it. In the meantime, adding this line to your janusgraph-cassandra.properties works around the problem:

log.tx.key-consistent=true
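
For clarity, the relevant section of janusgraph-cassandra.properties with the workaround applied would look like this (a sketch based on the settings in the question; setting log.tx.key-consistent=true makes the transaction log use the backend's key-consistency behavior, which appears to avoid the QUORUM reads seen in the stack trace):

storage.backend=cql
storage.cql.read-consistency-level=LOCAL_QUORUM
storage.cql.write-consistency-level=LOCAL_QUORUM
storage.cql.local-datacenter=dc2
storage.cql.only-use-local-consistency-for-system-operations=true
storage.cql.replication-strategy-class=NetworkTopologyStrategy
storage.cql.replication-strategy-options=dc1,1,dc2,1
log.tx.key-consistent=true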