I am using Apache Spark SQL to query data from a database. I know that Spark shares the Hive metastore by default. I have partitioned the input data on a column `id`, which has more than 300k distinct values, so the table currently has more than 300k partitions, and that number will keep growing over time.
Will this many partitions cause any problems?