1 vote

We have Ranger policies defined on Hive tables, and authorization works as expected when we use the Hive CLI or Beeline. But when we access those same tables through spark-shell or spark-submit, the policies are not enforced.

Is there any way to set it up?

Problem Statement: Ranger secures only the Hive (JDBC) server, i.e. HiveServer2. Spark does not go through HS2; it talks directly to the Hive Metastore and reads the underlying files. Hence, the only way to enforce Ranger's Hive policies is to access Hive via JDBC. The other option is HDFS or storage ACLs, which provide coarse-grained control over file paths; you can use Ranger to manage HDFS ACLs as well, and in that scenario Spark will be bound by those policies. But if I use Ranger to manage HDFS ACLs, as you mentioned, that is only coarse-grained control over files, and I have some fine-grained use cases at the row/column level.
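Since the problem statement says Ranger Hive policies are enforced only on the HS2/JDBC path, one workaround is to make Spark read the table through the HiveServer2 JDBC endpoint instead of going straight to the Metastore. The sketch below is a spark-shell snippet; the host `hs2.example.com:10000`, user `joyan`, and table `db.secure_table` are placeholder assumptions to be replaced with your cluster's values.

```scala
// Sketch: read a Hive table via HiveServer2 (JDBC) so that Ranger's
// Hive policies are applied, instead of Spark's direct Metastore/file access.
// Host, port, user, and table below are hypothetical placeholders.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:hive2://hs2.example.com:10000/default")
  .option("driver", "org.apache.hive.jdbc.HiveDriver")
  .option("user", "joyan")                 // the user Ranger authorizes
  .option("dbtable", "db.secure_table")
  .load()

df.show()   // result reflects any row/column filtering Ranger applies in HS2
```

The trade-off is that the scan is executed by HiveServer2 rather than by Spark's parallel file readers, so you gain fine-grained row/column enforcement at the cost of Spark's direct read performance. The Hive JDBC driver (`hive-jdbc` and its dependencies) must be on the Spark driver/executor classpath.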

1
#rikamamanus Please reply to the updated question - Joyan

1 Answer

0 votes

Check the Ranger audits in the Ranger UI and look for denied results on those tables, then verify which user the requests are coming from.