Is there a way in Apache Spark to create a Spark SQL proxy table that simply proxies to an underlying (custom) data source?
I have a custom data source that supports predicate pushdown by implementing org.apache.spark.sql.sources.PrunedFilteredScan, and I would now like to run Spark SQL against that source so that filter predicates are passed through (pushed down) to it. Registering the data source as an ordinary temporary view (via sqlContext.read.format("mydatasource").load().createOrReplaceTempView("myTable")) is not an option, as this will ultimately pull all of the data into Spark.
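For concreteness, here is a minimal sketch of what I would like to work. The format name "mydatasource", the view name "myTable", and the column names are placeholders for my actual source:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("proxy-table-example").getOrCreate()

// Register the custom source as a temp view; this is the step that
// (as far as I can tell) materializes the full dataset in Spark.
spark.read.format("mydatasource").load().createOrReplaceTempView("myTable")

// Ideally the WHERE predicate below would reach the source's
// buildScan(requiredColumns, filters) via PrunedFilteredScan,
// instead of being applied after a full scan in Spark.
spark.sql("SELECT id, name FROM myTable WHERE id > 100").show()
```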