
I have installed Hadoop on 3 nodes: 1 master and 2 slave nodes. The master node and one of the slave nodes have the same Hadoop path, i.e. /home/hduser/hadoop, but on the other slave node it is different, i.e. /usr/hadoop.

So while running ./start-all.sh from the master, the namenode and jobtracker started, and the datanode started on the slave that has the same Hadoop path as the master, but the other slave node gives an error like:

ngs-dell: bash: line 0: cd: /home/hduser/hadoop/libexec/..: No such file or directory

This means it is searching on the same path as on the master, but that slave has a different path.
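For context, in Hadoop 1.x the start scripts push the master's own Hadoop path to every slave. Roughly, start-all.sh runs something like the following for each host listed in conf/slaves (a sketch, assuming Hadoop 1.x; the hostname is the one from the error above):

    # Approximation of what slaves.sh executes per slave: it cd's into the
    # master's HADOOP_HOME, which does not exist on ngs-dell, hence the error.
    ssh ngs-dell 'cd /home/hduser/hadoop/libexec/.. ; /home/hduser/hadoop/bin/hadoop-daemon.sh start datanode'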

Please tell me how to solve this issue.

And one more doubt: is it compulsory that all Hadoop nodes (master and slaves) have the same username? In my case it is hduser. If I change it on one node of the cluster, it gives me an error.
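It is not strictly compulsory: in Hadoop 1.x the start scripts simply ssh to each slave as the user who invoked them, so a different username on one node can be mapped in the master's ~/.ssh/config. A minimal sketch; the username below is hypothetical:

    # On the master, map a per-host username so the start scripts can still
    # ssh into that slave ("otheruser" is hypothetical):
    printf 'Host ngs-dell\n    User otheruser\n' >> ~/.ssh/config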

The easiest way could be to create the same path and then make it a symbolic link to the actual path. - abhinav
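A sketch of that workaround, using the paths from the question (run on the slave whose Hadoop actually lives in /usr/hadoop; you may need sudo):

    # Make the path the master expects resolve to the real installation.
    mkdir -p /home/hduser                  # usually exists already for user hduser
    ln -s /usr/hadoop /home/hduser/hadoop  # expected path -> actual path

After this, start-all.sh's cd into /home/hduser/hadoop/libexec/.. succeeds on that slave too.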

1 Answer


I think you may not have changed the 'hadoop.tmp.dir' setting in core-site.xml on the slave node.

You can check the answer in this post.
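A quick way to verify this on the slave (a sketch, assuming the /usr/hadoop install path from the question; the tmp directory below is only an example value):

    # Show what hadoop.tmp.dir is set to on this node...
    grep -A 1 'hadoop.tmp.dir' /usr/hadoop/conf/core-site.xml
    # ...and make sure the configured directory really exists here:
    mkdir -p /usr/hadoop/tmp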