I'm using spark-cassandra-connector_2.11 version 2.3.0 with the latest Spark 2.3.0, trying to read data from Cassandra 3.0.11.1485 (DSE 5.0.5).
Example read that works without a problem:
JavaRDD<Customer> result = javaFunctions(sc).cassandraTable(MyKeyspaceName, "customers", mapRowTo(Customer.class));
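For context, mapRowTo maps rows onto a JavaBean-style class via reflection; a minimal sketch of such a Customer class, with illustrative field names (my real schema differs):

// Hypothetical JavaBean for mapRowTo: no-arg constructor plus getters/setters.
public class Customer implements java.io.Serializable {
    private String id;
    private String name;

    public Customer() { }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}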
Another read that works correctly, when run from a unit test (single thread, single read):
cassandraConnector.withSessionDo(new AbstractFunction1<Session, Void>() {
    @Override
    public Void apply(Session session) {
        // Read something from Cassandra via the Session - works fine here as well.
        return null;
    }
});
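The body of apply there is essentially a plain driver query, along these lines (the keyspace/table in the query are illustrative, not my actual statement):

// Illustrative single read through the driver Session (DataStax Java driver 3.x).
ResultSet rs = session.execute("SELECT * FROM my_keyspace.customers LIMIT 1");
Row row = rs.one();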
Problematic read (mapPartitions + withSessionDo):
CassandraConnector cassandraConnector = CassandraConnector.apply(sc.getConf());
JavaRDD<CustomerEx> enrichedRDD = SomeSparkRDD.mapPartitions((FlatMapFunction<Iterator<Customer>, CustomerEx>) customerIterator ->
    cassandraConnector.withSessionDo(new AbstractFunction1<Session, Iterator<CustomerEx>>() {
        @Override
        public Iterator<CustomerEx> apply(Session session) {
            return asStream(customerIterator, false)
                .map(customer -> fetchDataViaSession(customer, session))
                .filter(x -> x != null)
                .iterator();
        }
    }));
public static <T> Stream<T> asStream(Iterator<T> sourceIterator, boolean parallel) {
    Iterable<T> iterable = () -> sourceIterator;
    return StreamSupport.stream(iterable.spliterator(), parallel);
}
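For completeness, the resulting RDD is consumed with an ordinary action; the exception only surfaces once the partitions are actually computed (the action below is illustrative):

// Illustrative action: evaluating the partitions is what triggers the failure.
long processed = enrichedRDD.count();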
Some iterations of map(customer -> fetchDataViaSession(customer, session)) work, but the majority fail with NoHostAvailableException.
I tried various settings for the following properties with no success (set on the SparkConf, as sketched after this list):
spark.cassandra.connection.connections_per_executor_max
spark.cassandra.connection.keep_alive_ms
spark.cassandra.input.fetch.size_in_rows
spark.cassandra.input.split.size_in_mb
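For reference, these were set along these lines (the values are just examples of what I tried, not a recommendation):

SparkConf conf = new SparkConf()
    .set("spark.cassandra.connection.connections_per_executor_max", "10")
    .set("spark.cassandra.connection.keep_alive_ms", "30000")
    .set("spark.cassandra.input.fetch.size_in_rows", "100")
    .set("spark.cassandra.input.split.size_in_mb", "64");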
I also tried reducing the number of partitions of the RDD on which I run mapPartitions + withSessionDo, e.g.:
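// Illustrative: shrink the partition count before the mapPartitions call;
// the target count of 8 is arbitrary.
JavaRDD<Customer> fewerPartitions = SomeSparkRDD.coalesce(8);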