Situation: I have set up Hive, Hue and Hadoop in separate Docker containers on the same Docker network: one Hadoop namenode, two datanodes, one Hue instance, one HiveServer2 and a Postgres metastore, each in its own container. I was able to configure a hue proxy user in the hdfs-site.xml of the namenode and can browse the filesystem via WebHDFS. For Hive, however, I get the following error within Hue:
Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException
(org.apache.hadoop.security.authorize.AuthorizationException):
User: root is not allowed to impersonate hue
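For comparison, the proxy-user entries for the working WebHDFS browsing look roughly like this in my namenode configuration (wildcard values chosen for simplicity in this local setup):

```xml
<!-- namenode configuration: lets the hue user impersonate others over WebHDFS -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
```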
I am able to create Hive tables and write to them directly in Hive, or from within Spark jobs for example.
What I've tried so far:
I've tried adding properties like
- hive.server2.proxy.user=hue
- hive.server2.enable.impersonation=true
- hadoop.proxyuser.hue.hosts=*
- hive.server2.authentication=NONE
in different configuration files like:
- core-site.xml in hdfs-namenode configuration folder
- core-site.xml in hive-hadoop folder
- hdfs-site.xml in both
- hive-site.xml in hive-conf folder
Most of this was suggested in similar questions, but it no longer seems up to date. For some of these properties Hive reports: Property unknown.
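My current guess, based on the error message naming root as the impersonating user (presumably the user HiveServer2 runs as), is that the Hadoop side needs proxy-user entries for root and the Hive side needs impersonation switched on. This is only a sketch of what I think the relevant snippets would look like, not a confirmed fix:

```xml
<!-- guess: core-site.xml on the Hadoop cluster side,
     since the error says root is not allowed to impersonate hue -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```

```xml
<!-- guess: hive-site.xml; hive.server2.enable.doAs seems to be the current name
     for the impersonation switch in Hive 2.x -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```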
What I need clarification on:
- What is the right file to add the configuration to?
- What is the right property to add?
- Do I have to add some configuration to Hue regarding the metastore?
Additional Information:
- Hive version: 2.3.1
- Hive's bundled Hadoop version: 2.7.4
- Hadoop cluster version: 2.7.2 (I assume the version difference is not the problem here?)
- Hue version: 4 (gethue/hue:latest from Docker Hub)