1 vote

I have installed Hadoop on a Linux cluster. When I try to start the daemons with the command $ bin/start-all.sh, I get the following errors:

mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
/home/spuri2/spring_2012/Hadoop/hadoop/hadoop-1.0.2/bin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-spuri2-namenode.pid: Permission denied
head: cannot open `/var/log/hadoop/spuri2/hadoop-spuri2-namenode-gpu02.cluster.out' for reading: No such file or directory
localhost: /home/spuri2/.bashrc: line 10: /act/Modules/3.2.6/init/bash: No such file or directory
localhost: mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
localhost: chown: cannot access `/var/log/hadoop/spuri2': No such file or directory

I have set the log directory parameter in conf/hadoop-env.sh to a directory under /tmp, and I have also set "hadoop.tmp.dir" in core-site.xml to a /tmp directory. I do not have write access to /var/log, yet the Hadoop daemons are still trying to write to /var/log and failing.
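For reference, this is roughly what I put in conf/hadoop-env.sh (the exact /tmp paths here are placeholders; HADOOP_PID_DIR is shown only because of the /var/run/hadoop pid error above):

# conf/hadoop-env.sh
export HADOOP_LOG_DIR=/tmp/hadoop-${USER}/logs   # a directory my user can write to
export HADOOP_PID_DIR=/tmp/hadoop-${USER}/pids   # keeps pid files out of /var/run/hadoop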

Why is this happening?


3 Answers

1 vote

You have to set this directory in the core-site.xml file, not in hadoop-env.sh:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Directory_hadoop_user_have_permission/temp/${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
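Whatever directory you pick, it must already exist and be owned by the user running Hadoop; a rough sequence (the path is just the placeholder from the snippet above) would be:

# create the base directory, give it to the Hadoop user, then reformat the namenode
mkdir -p /Directory_hadoop_user_have_permission/temp
chown -R $USER /Directory_hadoop_user_have_permission/temp
bin/hadoop namenode -format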

1 vote

In short, I faced this problem because there were multiple installations of Hadoop on the university cluster. The installation done as the root user was interfering with my local Hadoop installation.

The Hadoop daemons were not starting because they could not write to certain files that required root privileges. I was running Hadoop as a normal user, but our university's system administrator had installed Hadoop as the root user, so when I started my local installation, the root installation's configuration files took priority over my local configuration files. It took a long time to figure this out, but after the root installation of Hadoop was removed, the problem was resolved.
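If you suspect a similar clash, a quick sanity check is to see which installation and configuration your shell actually resolves to (generic commands, nothing specific to our cluster):

which hadoop           # which hadoop binary is first on the PATH
echo $HADOOP_HOME      # if set, scripts may use this install instead of the local one
echo $HADOOP_CONF_DIR  # if set, this overrides the local conf directory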

0 votes

I used to get the same error. If you have added the required properties under the configuration tag, then before running switch to the user who owns the Hadoop directory (su - username) and then try executing start-all.sh.
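For example, the sequence would look something like this ('hduser' and the install path are just placeholders for whatever applies on your machine):

su - hduser            # the account that owns the Hadoop directory
cd /usr/local/hadoop   # wherever Hadoop is installed
bin/start-all.sh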

Make sure you have added the necessary properties between the configuration tags, as described in this tutorial:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/