I have set up a Spark and Cassandra cluster and am using the Cassandra connector in my Spark jobs. To run my jobs I set spark.cassandra.connection.host and pass the IP address of one of the seed nodes in one data center. I was going through the connector site and it states:
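For context, this is roughly how I configure the connection today; the application name and the IP are just placeholders for my actual job and one of my seed nodes:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Roughly my current setup -- the IP below is a placeholder for a single
// seed node in one data center.
val conf = new SparkConf()
  .setAppName("my-cassandra-job")
  .set("spark.cassandra.connection.host", "192.168.1.10") // one contact node only

val sc = new SparkContext(conf)
```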
"The initial contact node given in spark.cassandra.connection.host can be any node of the cluster. The driver will fetch the cluster topology from the contact node and will always try to connect to the closest node in the same data center. If possible, connections are established to the same node the task is running on."
My question is: what happens if that contact node is down? Spark will not be able to fetch the cluster topology and hence the job will not work. I have also used the Node.js connector for Cassandra, where we provide an array of contact points. Is something similar possible with the Spark Cassandra connector?
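In other words, what I am hoping for is something along these lines (purely illustrative; I don't know whether the connector actually accepts a list of hosts like this):

```scala
import org.apache.spark.SparkConf

// Purely hypothetical -- this is the kind of configuration I'm looking for,
// analogous to the contactPoints array in the Node.js driver.
val conf = new SparkConf()
  .setAppName("my-cassandra-job")
  .set("spark.cassandra.connection.host", "192.168.1.10,192.168.1.11,192.168.1.12")
```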