0
votes

When I start Hadoop using start-all.sh, the DataNode and SecondaryNameNode do not come up on the server, and the DataNode on the slave does not start either. When I troubleshoot by running hdfs datanode by hand, I get this error:

15/06/29 11:06:34 INFO datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/06/29 11:06:35 WARN common.Util: Path /var/lib/hadoop/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
15/06/29 11:06:35 FATAL datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:299)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2152)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
        ... 9 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
        at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:39)
        ... 14 more
15/06/29 11:06:35 INFO util.ExitUtil: Exiting with status 1
15/06/29 11:06:35 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localserver39/10.200.208.28

What is the issue with the DataNode on the slave and the SecondaryNameNode on the master?

Running start-dfs.sh on the master gives this output:

[email protected]'s password: 10.200.208.28: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-localserver39.out
10.200.208.28: nice: /usr/libexec/../bin/hdfs: No such file or directory
Starting secondary namenodes [0.0.0.0]
[email protected]'s password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-MC-RND-1.out

After running jps I get this:

bash-3.2$ jps
8103 Jps
7437 DataNode
7309 NameNode

core-site.xml

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.200.208.29:9000/</value>
</property>

</configuration>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

<property>
   <name>dfs.datanode.data.dir</name>
   <value>/Backup-HDD/hadoop/datanode</value>
</property>

<property>
        <name>dfs.namenode.data.dir</name>
        <value>/Backup-HDD/hadoop/namenode</value>
</property>


<property>
  <name>dfs.name.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
    <value>/Backup-HDD/hadoop/datanode</value>
</property>
Is your Backup-HDD a mounted filesystem? - Rajesh N
Don't use a mounted filesystem for this purpose; it can result in permission issues. Try paths like /home/hadoop/namenode/ and /home/hadoop/datanode/. Make sure you create these folders and give read and write permissions. - Rajesh N
The issue is I don't have enough space in the home directory; that's why I used the RAID HD... - dilshad
Hi, please use <property> <name>dfs.datanode.data.dir</name> <value>file:/var/lib/hadoop/hdfs/datanode</value> </property> - karthik
As @karthik said, try /var/lib/hadoop/hdfs/datanode, or you could try /usr/local/hadoop/namenode. Post the result of df -h in your question, because you mentioned there is no free space in the home directory. - Rajesh N
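For reference, the checks the commenters ask about could look like this (a sketch; /Backup-HDD comes from the config above, and the hadoop user running the daemons is an assumption based on the log paths):

df -h /Backup-HDD                                                # free space on the RAID mount
ls -ld /Backup-HDD/hadoop/datanode /Backup-HDD/hadoop/namenode   # do the directories exist?
sudo chown -R hadoop:hadoop /Backup-HDD/hadoop                   # hadoop user must own them
sudo chmod -R 750 /Backup-HDD/hadoop                             # and be able to read/write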

2 Answers

0
votes

Remove the properties below from hdfs-site.xml:

<property>
   <name>dfs.datanode.data.dir</name>
   <value>/Backup-HDD/hadoop/datanode</value>
</property>

<property>
    <name>dfs.namenode.data.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
    <name>dfs.name.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
    <name>dfs.data.dir</name>
    <value>/Backup-HDD/hadoop/datanode</value>
</property>

Add the two properties below to hdfs-site.xml:

<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/user/Backup-HDD/hadoop/datanode</value>
</property>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/user/Backup-HDD/hadoop/namenode</value>
</property>

Make sure the paths specified in the name and data directories exist on your system.
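For example, the directories can be created and handed over to the hadoop user like this (a sketch; the /home/user/Backup-HDD prefix simply mirrors the placeholder values above and should be adjusted to the real mount point):

mkdir -p /home/user/Backup-HDD/hadoop/datanode /home/user/Backup-HDD/hadoop/namenode
chown -R hadoop:hadoop /home/user/Backup-HDD/hadoop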

0
votes

Problem solved after searching on Google.

Update .bashrc and .bash_profile:

cat .bashrc

#!/bin/bash
# unset all HADOOP environment variables
env | grep HADOOP | sed 's/.*\(HADOOP[^=]*\)=.*/\1/' > un_var
while read line; do unset "$line"; done < un_var
rm un_var
export JAVA_HOME="/usr/java/latest/"
export HADOOP_PREFIX="/home/hadoop/hadoop"
export HADOOP_YARN_USER="hadoop"
export HADOOP_HOME="$HADOOP_PREFIX"
export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export HADOOP_PID_DIR="$HADOOP_PREFIX"
export HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
export YARN_HOME="$HADOOP_PREFIX"
export YARN_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export YARN_PID_DIR="$HADOOP_PREFIX"
export YARN_LOG_DIR="$HADOOP_PREFIX/logs"
export YARN_OPTS="$YARN_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
cat .bash_profile

#!/bin/bash
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

The issue was with the Bash profile (stale HADOOP environment variables).
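After updating both files, reloading the environment and restarting HDFS should bring the missing daemons up; a minimal sketch, assuming the sbin scripts live under the HADOOP_PREFIX exported above:

source ~/.bash_profile
$HADOOP_PREFIX/sbin/stop-dfs.sh
$HADOOP_PREFIX/sbin/start-dfs.sh
jps    # DataNode and SecondaryNameNode should now be listed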