I am wondering how Kafka partitions are shared among the SimpleConsumer instances running inside the executor processes. I know how the high-level Kafka consumer shares partitions across the different consumers in a consumer group, but how does that happen when Spark uses the SimpleConsumer? There will be multiple executors for the streaming job across machines.