I am trying to query a Cassandra table through the Spark Thrift Server. I have set up my Spark cluster with one master and one worker on the same node.

I am starting the Thrift Server with the following command, without any custom configuration.

$SPARK_HOME/sbin/start-thriftserver.sh --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.2 --conf spark.cassandra.connection.host=127.0.0.1 --master spark://<spark-master>:7077

I have created the following table in Cassandra, inserted no more than 10 records into it, and registered it in the Hive metastore.

CREATE TABLE IF NOT EXISTS places_for_research(
    research_id uuid,
    tenant_id uuid,
    country text,
    place_id uuid,
    PRIMARY KEY((tenant_id,research_id),country,place_id)
);
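For reference, the registration step is not shown in the question, but with the spark-cassandra-connector a table is typically exposed to the metastore with a statement along these lines, run from Beeline. The keyspace name "my_keyspace" is an assumption, since it does not appear above:

-- Hypothetical metastore registration; replace "my_keyspace" with the real keyspace.
CREATE TABLE IF NOT EXISTS places_for_research
USING org.apache.spark.sql.cassandra
OPTIONS (keyspace "my_keyspace", table "places_for_research");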

Now when I query this table from Beeline, the first execution takes around 19 seconds, while subsequent executions take about half a second.

Following is the query I execute from Beeline, which returns 2 records.

select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858';

I am wondering what causes the first request to take so long. How can I optimise the performance of the first request?

Following are the Thrift Server logs for reference:

17/11/03 20:12:50 INFO SparkExecuteStatementOperation: Running query 'select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858'' with 9d9a5c7c-2766-48c3-ab58-348b461b6577
17/11/03 20:12:50 INFO SparkSqlParser: Parsing command: select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858'
17/11/03 20:12:51 INFO HiveMetaStore: 2: get_table : db=default tbl=places_for_research
17/11/03 20:12:51 INFO audit: ugi=anonymous ip=unknown-ip-addr  cmd=get_table : db=default tbl=places_for_research  
17/11/03 20:12:51 INFO HiveMetaStore: 2: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/11/03 20:12:51 INFO ObjectStore: ObjectStore, initialize called
17/11/03 20:12:51 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/11/03 20:12:51 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/11/03 20:12:51 INFO ObjectStore: Initialized ObjectStore
17/11/03 20:12:52 INFO CatalystSqlParser: Parsing command: array<string>
17/11/03 20:12:52 INFO HiveMetaStore: 2: get_table : db=default tbl=places_for_research
17/11/03 20:12:52 INFO audit: ugi=anonymous ip=unknown-ip-addr  cmd=get_table : db=default tbl=places_for_research  
17/11/03 20:12:52 INFO CatalystSqlParser: Parsing command: array<string>
17/11/03 20:12:53 INFO ClockFactory: Using native clock to generate timestamps.
17/11/03 20:12:53 WARN NettyUtil: Found Netty's native epoll transport, but not running on linux-based operating system. Using NIO instead.
17/11/03 20:12:54 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
17/11/03 20:12:54 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
17/11/03 20:12:55 INFO CassandraSourceRelation: Input Predicates: [IsNotNull(tenant_id), IsNotNull(research_id), EqualTo(tenant_id,340276cb-389b-4f57-a2cf-6ff5ec3e4d91), EqualTo(research_id,95dafbe7-78d0-4509-9553-899dfaa7b858)]
17/11/03 20:12:55 INFO CassandraSourceRelation: Input Predicates: [IsNotNull(tenant_id), IsNotNull(research_id), EqualTo(tenant_id,340276cb-389b-4f57-a2cf-6ff5ec3e4d91), EqualTo(research_id,95dafbe7-78d0-4509-9553-899dfaa7b858)]
17/11/03 20:12:57 INFO CodeGenerator: Code generated in 652.925772 ms
17/11/03 20:12:57 INFO SparkContext: Starting job: run at AccessController.java:0
17/11/03 20:12:57 INFO DAGScheduler: Got job 0 (run at AccessController.java:0) with 1 output partitions
17/11/03 20:12:57 INFO DAGScheduler: Final stage: ResultStage 0 (run at AccessController.java:0)
17/11/03 20:12:57 INFO DAGScheduler: Parents of final stage: List()
17/11/03 20:12:57 INFO DAGScheduler: Missing parents: List()
17/11/03 20:12:57 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at run at AccessController.java:0), which has no missing parents
17/11/03 20:12:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 12.8 KB, free 366.3 MB)
17/11/03 20:12:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.3 KB, free 366.3 MB)
17/11/03 20:12:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.110:57001 (size: 6.3 KB, free: 366.3 MB)
17/11/03 20:12:58 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/11/03 20:12:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at run at AccessController.java:0)
17/11/03 20:12:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/11/03 20:12:58 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.110, executor 0, partition 0, ANY, 8403 bytes)
17/11/03 20:13:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.110:57005 (size: 6.3 KB, free: 366.3 MB)
17/11/03 20:13:05 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
17/11/03 20:13:09 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 11709 ms on 192.168.1.110 (executor 0) (1/1)
17/11/03 20:13:09 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/11/03 20:13:10 INFO DAGScheduler: ResultStage 0 (run at AccessController.java:0) finished in 11.734 s
17/11/03 20:13:10 INFO DAGScheduler: Job 0 finished: run at AccessController.java:0, took 12.189787 s
17/11/03 20:13:10 INFO CodeGenerator: Code generated in 63.249603 ms
17/11/03 20:13:10 INFO SparkExecuteStatementOperation: Result Schema: StructType(StructField(tenant_id,StringType,true), StructField(research_id,StringType,true), StructField(country,StringType,true), StructField(place_id,StringType,true))

Thanks.

1 Answer

The Spark Thrift Server is lazy, which means it doesn't actually start any of the machinery for running queries until the first query is launched. The delay you see is the actual startup and acquisition of remote resources. This will always take some non-zero amount of time, but you can avoid paying it on a real request by issuing a dummy query against the Thrift Server immediately after it starts up.
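As a sketch of that warm-up idea: right after start-thriftserver.sh returns, fire a throwaway query through Beeline so the executors and the Cassandra connection are already initialised when real clients connect. The JDBC URL, port, and user below are assumptions based on the default Thrift Server settings:

# Hypothetical warm-up request; the JDBC URL and user are assumed defaults.
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n anonymous \
  -e "SELECT * FROM places_for_research LIMIT 1;"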