5
votes

I'm getting this error when reading from a table in a 5-node cluster using the DataStax driver.

2015-02-19 03:24:09,908 ERROR [akka.actor.default-dispatcher-9] OneForOneStrategy akka://user/HealthServiceChecker-49e686b9-e189-48e3-9aeb-a574c875a8ab Can't use this Cluster instance because it was previously closed
java.lang.IllegalStateException: Can't use this Cluster instance because it was previously closed
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1128) ~[cassandra-driver-core-2.0.4.jar:na]
    at com.datastax.driver.core.Cluster.init(Cluster.java:149) ~[cassandra-driver-core-2.0.4.jar:na]
    at com.datastax.driver.core.Cluster.connect(Cluster.java:225) ~[cassandra-driver-core-2.0.4.jar:na]
    at com.datastax.driver.core.Cluster.connect(Cluster.java:258) ~[cassandra-driver-core-2.0.4.jar:na]

I am able to connect using cqlsh and perform read operations.

Any clue what could be the problem here?

Settings:

  • Consistency level: ONE
  • Keyspace replication strategy: 'class': 'NetworkTopologyStrategy', 'DC2': '1', 'DC1': '1'
  • Cassandra version: 2.0.6

The code managing Cassandra sessions is centralized; here it is:

trait ConfigCassandraCluster
  extends CassandraCluster
{
  def cassandraConf: CassandraConfig
  lazy val port = cassandraConf.port
  lazy val host = cassandraConf.host
  lazy val cluster: Cluster =
    Cluster.builder()
      .addContactPoints(host)
      .withReconnectionPolicy(new ExponentialReconnectionPolicy(100, 30000))
      .withPort(port)
      .withSocketOptions(new SocketOptions().setKeepAlive(true))
      .build()

  lazy val keyspace = cassandraConf.keyspace
  private lazy val casSession = cluster.connect(keyspace)
  val session = new SessionProvider(casSession)
}

class SessionProvider(casSession: => Session) extends Logging {
  var lastSuccessful: Long = 0
  var firstSuccessful: Long = -1
  def apply[T](fn: Session => T): T = {
    val result = retry(fn, 15)
    if(firstSuccessful < 0)
      firstSuccessful = System.currentTimeMillis()
    lastSuccessful = System.currentTimeMillis()
    result
  }

  private def retry[T](fn: Session => T, remainingAttempts: Int): T = {
    // retry logic
  }
}
2
Your code has a problem somewhere... we cannot magically know the cause without seeing the code. You are closing the cluster connection somewhere... and are trying to query through a closed connection. - sarveshseri
Thanks for the reply. I forgot to mention that the code works elsewhere, except with this Cassandra configuration. I can confirm the code doesn't have any closing logic anywhere, and sessions are managed centrally, including retries in case of failures (code above). - Kasun Kumara

2 Answers

5
votes

The problem is that cluster.connect(keyspace) closes the Cluster itself when it runs into a NoHostAvailableException. Because of that, your retry logic keeps reusing the already-closed Cluster, which is why you get the IllegalStateException.
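Here is a minimal sketch of that failure mode (the contact point 10.0.0.1 and keyspace my_keyspace are placeholders, not values from the question):

object FailureDemo extends App {
  import com.datastax.driver.core.Cluster
  import com.datastax.driver.core.exceptions.NoHostAvailableException

  val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()

  try {
    // If no host is reachable, connect() throws NoHostAvailableException and
    // the driver closes this Cluster instance as part of the failed init.
    cluster.connect("my_keyspace")
  } catch {
    case _: NoHostAvailableException =>
      // Retrying on the same, now-closed Cluster reproduces the error from the question:
      // java.lang.IllegalStateException: Can't use this Cluster instance because it was previously closed
      cluster.connect("my_keyspace")
  }
}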

Have a look at the Cluster.init() method and you will see why.

The solution is to rebuild the Cluster inside the retry logic, i.e. do Cluster.builder().addContactPoint(node).build().connect(keyspace). That way each retry works with a fresh Cluster object, as sketched below.
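A minimal sketch of such a retry, assuming a single contact point and a fixed back-off (the object ClusterRetry, the method connectWithRetry and the sleep interval are my own placeholders, not part of the original answer):

import com.datastax.driver.core.{Cluster, Session}
import com.datastax.driver.core.exceptions.NoHostAvailableException

object ClusterRetry {
  def connectWithRetry(node: String, keyspace: String, remainingAttempts: Int): Session =
    try {
      // Build a fresh Cluster for every attempt, so a Cluster that was
      // closed by a failed connect() is never reused.
      Cluster.builder().addContactPoint(node).build().connect(keyspace)
    } catch {
      case _: NoHostAvailableException if remainingAttempts > 1 =>
        Thread.sleep(1000) // simple fixed back-off between attempts
        connectWithRetry(node, keyspace, remainingAttempts - 1)
    }
}

// Usage, mirroring the 15 attempts used in the question's SessionProvider:
// val session = ClusterRetry.connectWithRetry("10.0.0.1", "my_keyspace", 15)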

0
votes

Search your code for session.close().

You are closing your connection somewhere as stated in the comments. Once a session is closed, it can't be used again. Instead of closing connections, pool them to allow for re-use.
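For example, a minimal sketch of keeping one long-lived Session per application instead of closing it per request (host, keyspace and table name are placeholders):

import com.datastax.driver.core.{Cluster, Session}

object Cassandra {
  private lazy val cluster: Cluster =
    Cluster.builder().addContactPoint("10.0.0.1").build()

  // One Session for the whole application; the driver pools connections internally.
  lazy val session: Session = cluster.connect("my_keyspace")

  // Close only once, on application shutdown - never per request.
  def shutdown(): Unit = cluster.close()
}

// Every caller reuses the same Session:
// val rows = Cassandra.session.execute("SELECT * FROM my_table")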