I'm new to big data, Hadoop, and Linux. We have a small 4-node cluster, 1 master and 3 workers, running Ambari 2.1 and Hadoop 2.2.6. All machines run Ubuntu Server 12.04. Everything is properly configured and works well, including DNS, SSH, NTP, etc. However, when I tried to install Hue 3.8.1 on top of that, following this guide: http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial/ the installation succeeded, and I'm able to open Hue in the browser and log in. But then it shows me 3 misconfiguration warnings:
- Filesystem root '/' should be owned by hdfs.
- Secret key should be configured as a random string. All sessions will be lost on restart ... but I've never set up Kerberos or any other security.
- The app won't work without a running Livy Spark server.
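For the second warning, my understanding is that Hue reads a `secret_key` value from the `[desktop]` section of hue.ini, and the warning just means that value is empty. Here is a small sketch of how one could generate a random string for it (the 50-character length is my own assumption, not something from the guide):

```python
import random
import string

def generate_secret_key(length=50):
    """Generate a random alphanumeric string to paste into hue.ini's secret_key."""
    charset = string.ascii_letters + string.digits
    return ''.join(random.choice(charset) for _ in range(length))

print(generate_secret_key())
```

If I understand the warning correctly, the printed value would then go into hue.ini as `secret_key=<generated value>` under `[desktop]`, after which Hue needs a restart.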
The folder /home/user/hue and everything in it is owned by the hdfs user and belongs to the hdfs group. When I logged into Hue for the first time, I created the admin user. Do I need to add this admin user to some group, and if so, which one?

Also, Spark is installed as part of the Ambari package and is up and running. Do I need to install Livy Spark separately, or is it again just a matter of configuration? So confused now ... I've double-checked all the config files and everything looks OK to me. I followed all the steps in the configuration guide and substituted the correct ports, hosts, and addresses. Any help on where to look, or even a direction to dig in, would be appreciated. Any ideas what is wrong and how to get Hue working? Thanks in advance.
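For reference, this is the `[spark]` section I would expect to have to fill in in hue.ini for the Livy warning, based on my reading of the guide (the hostname is a placeholder from my setup, and 8998 is the Livy default port as far as I know):

```ini
[spark]
  # Host where the Livy server is running (placeholder hostname)
  livy_server_host=master.mycluster.local
  # Default Livy port, as far as I know
  livy_server_port=8998
```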