We are using the DataStax spark-cassandra-connector to write to a Cassandra cluster that is deployed separately from our Spark cluster.
We have observed that for bulk loads (~500M records) the write runs for about an hour, and read performance degrades while the write is in progress. Write performance itself is good, but the read degradation is unacceptable in our environment: some read requests are critical and must always be served within a fixed time frame.
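For context, our write path is roughly the following sketch (the host, keyspace, table, column names, and input path are placeholders, not our real values; the throughput option is the connector's write-throttling setting):

```scala
import com.datastax.spark.connector._
import org.apache.spark.sql.SparkSession

object BulkWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("bulk-write")
      .config("spark.cassandra.connection.host", "cassandra-host") // placeholder
      // Optional throttle: caps per-core write throughput (MB/s) so that
      // concurrent reads on the cluster are less affected.
      .config("spark.cassandra.output.throughputMBPerSec", "5")
      .getOrCreate()

    // Placeholder input and parsing; our real job reads ~500M records.
    val records = spark.sparkContext
      .textFile("hdfs:///data/records")
      .map { line =>
        val f = line.split(",")
        (f(0), f(1))
      }

    // saveToCassandra issues batched, token-aware writes through the normal
    // CQL write path -- the same path that serves our reads.
    records.saveToCassandra("my_keyspace", "my_table", SomeColumns("id", "value"))
  }
}
```

Even with the throttle shown above, reads still slow down noticeably during the load.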
I read an article on an SSTable Loader use case, which appears to address the same issue by using sstableloader (CassandraBulkLoader).
I also read a few SO questions, like this one, mentioning that writes can be really slow with sstableloader compared to the spark-cassandra-connector.
Now, what is the underlying reason that makes the spark-cassandra-connector faster but causes read latency to spike during bulk loads? Also, are there any drawbacks to sstableloader other than being slow?