
I'm trying to install Hadoop in pseudo-distributed mode by following these instructions: http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html

but I keep getting:

Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to 
/home/ec2-user/hadoop-2.4.0/logs/hadoop-ec2-user-secondarynamenode-ip-x-x-x-x.out

I'm just copy-pasting the configuration (.xml) files and running the start-dfs.sh command.
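As an aside, the "Starting secondary namenodes [0.0.0.0]" line is informational, not an error; each daemon's real status goes to a .log file alongside the .out file named in the console message. A quick check might look like this (log directory taken from the output above; the filename pattern is illustrative):

```shell
# Hadoop's console message names the .out file; the detailed status is in
# the matching .log file in the same directory.
LOG_DIR="$HOME/hadoop-2.4.0/logs"

# Scan the newest secondary-namenode log for problems, if one exists yet:
LOG_FILE=$(ls -t "$LOG_DIR"/*secondarynamenode*.log 2>/dev/null | head -n 1)
if [ -n "$LOG_FILE" ]; then
  grep -iE "error|exception" "$LOG_FILE" | tail -n 20
else
  echo "no secondary namenode log found in $LOG_DIR"
fi
```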

Others who have experienced this point to a typo in the configuration file, but I can't see anything wrong. Below are my config files.

etc/hadoop-2.4.0/core-site.xml:

<configuration>
 <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:9000</value>
 </property>
</configuration>

etc/hadoop-2.4.0/hdfs-site.xml:

<configuration>
 <property>
     <name>dfs.replication</name>
     <value>1</value>
 </property>
</configuration>

etc/hadoop-2.4.0/mapred-site.xml:

<configuration>
 <property>
     <name>mapred.job.tracker</name>
     <value>localhost:9001</value>
 </property>
</configuration>
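One thing worth noting: `mapred.job.tracker` is a Hadoop 1.x property, and there is no JobTracker in 2.4.0 (it doesn't affect start-dfs.sh either, which only starts the HDFS daemons). To run MapReduce jobs on YARN, the 2.x guide configures mapred-site.xml roughly like this instead:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```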

What am I doing wrong?


1 Answer


So what is the problem with that? Let it start the secondary namenode. Do you get any exception, or any other issue such as HDFS not starting?

In core-site.xml you should have something like the following (`fs.default.name` is the deprecated alias of `fs.defaultFS`; either works in Hadoop 2.x):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ipaddress:port</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/$user/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
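If you do set hadoop.tmp.dir, create the directory first and re-format the namenode before restarting, since HDFS metadata moves there. A minimal sketch (the path is an example matching the config above; substitute your own):

```shell
# Create the base directory named in hadoop.tmp.dir (example path;
# match it to the value in your core-site.xml):
mkdir -p "$HOME/hdfs/tmp"

# After changing core-site.xml, re-format HDFS and restart the daemons
# (commented out here because -format erases any existing HDFS data):
#   bin/hdfs namenode -format
#   sbin/start-dfs.sh
```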

Refer to this answer for more config details.