
When trying to set up a local pseudo-distributed Hadoop environment, I get this error when starting my namenode with start-dfs.sh:

"Could not find or load main class org.apache.hadoop.hdfs.tools.GetConf"

My Java version is shown below:

java version "1.7.0_85"
OpenJDK Runtime Environment (IcedTea 2.6.1) (7u85-2.6.1-5ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)

I have also changed this line in my hadoop-env.sh, under /usr/local/hadoop-2.7.1/etc/hadoop:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

In etc/hadoop/core-site.xml, I put:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

In etc/hadoop/hdfs-site.xml, I put:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

I have also changed my /home/hduser/.bashrc file, adding the lines below (all paths are correct):

#HADOOP VARIABLES START
export HADOOP_PREFIX=/usr/local/hadoop-2.7.1
export HADOOP_HOME=/usr/local/hadoop-2.7.1
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_PREFIX}/lib/native"
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
#HADOOP VARIABLES END
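One thing worth checking after sourcing .bashrc: the PATH lines above reference $HADOOP_INSTALL, but only HADOOP_HOME and HADOOP_PREFIX are exported. A small sketch of a check for this kind of mismatch (`require_set` is a hypothetical helper, not a Hadoop tool):

```shell
# require_set NAME: fail with a message if the environment variable NAME
# is unset or empty. Catches mismatches such as the PATH entries above
# referencing $HADOOP_INSTALL when only HADOOP_HOME was exported.
require_set() {
  eval "_val=\${$1}"
  if [ -z "$_val" ]; then
    echo "error: \$$1 is not set" >&2
    return 1
  fi
}
```

For example, `require_set HADOOP_HOME` succeeds once the export above has been sourced, while `require_set HADOOP_INSTALL` fails, which would leave the Hadoop bin and sbin directories off the PATH.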

When I run start-dfs.sh, only the DataNode shows up; with start-all.sh, only the NodeManager and DataNode show:

6098 NodeManager
5691 DataNode
6267 Jps

Nothing shows at http://localhost:*****/ either.

Comment (Nagendra): Check your JAVA_HOME path and try ./hadoop-daemon.sh start namenode to start the namenode, then run jps to check.

1 Answer


Format your namenode first with the command hadoop namenode -format, then try to start it from your terminal with ./hadoop-daemon.sh start namenode.
Run the jps command to check.
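Spelled out, assuming HADOOP_HOME=/usr/local/hadoop-2.7.1 as in the question, the sequence would be roughly:

```shell
# Format the NameNode metadata store first (this wipes any existing HDFS
# data, so only do it on a fresh setup), then start just the NameNode.
"$HADOOP_HOME/bin/hadoop" namenode -format
"$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
# NameNode should now appear in the jps listing alongside DataNode.
jps
```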

core-site.xml (note: fs.default.name is the deprecated name for fs.defaultFS; both resolve to the same setting):

<configuration> 
  <property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost:9000</value> 
  </property> 
</configuration> 

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/path/hadoop/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/path/hadoop/datanode</value>
  </property>
</configuration>
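Also note that the dfs.namenode.name.dir and dfs.datanode.data.dir directories must exist and be writable by the user running the daemons (hduser here) before you format, or the NameNode will fail to start. A minimal sketch (`prepare_dirs` is a hypothetical helper name):

```shell
# prepare_dirs DIR...: create each HDFS storage directory if it does not
# already exist; the daemons fail to start when these paths are missing
# or unwritable.
prepare_dirs() {
  for d in "$@"; do
    mkdir -p "$d" || return 1
  done
}
```

For example, `prepare_dirs /path/hadoop/namenode /path/hadoop/datanode` (run as a user with write access, or via sudo followed by chown to hduser) creates both storage directories before the format step.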