0 votes

I am attempting to set up a Hadoop pseudo-distributed cluster on CentOS 6.5. The version of Hadoop that I am using is 0.20, and I am also using Apache Pig 0.12.1.

I modified the following conf files:

core-site.xml

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:8020</value>
        </property>
    </configuration>

hdfs-site.xml

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>1</value>
        </property>

        <property>
            <name>dfs.permissions</name>
            <value>false</value>
        </property>
    </configuration>

mapred-site.xml

    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>127.0.0.1:8021</value>
        </property>
    </configuration>

So, after I configured the appropriate files, I issued the command hadoop namenode -format as well as sh start-all.sh. However, after running the jps command, I see that the namenode, secondary namenode, and datanode all start, but they only stay up for a short time. Looking at the log files, I see this:

    2014-11-28 20:32:59,434 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Call to /0.0.0.1:8020 failed on local exception: java.net.SocketException: Invalid argument
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
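
For reference, this is the exact sequence I ran (start-all.sh lives in the bin directory of the Hadoop 0.20 install):

    # format HDFS before the first start
    hadoop namenode -format

    # start the namenode, secondary namenode, datanode, jobtracker and tasktracker
    sh start-all.sh

    # check which Hadoop daemons are still running
    jps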

How would I go about fixing this issue?


2 Answers

0 votes

You may find that you need to use the hostname of the machine instead of localhost or 127.0.0.1.
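
For example, using a placeholder hostname of centos-box (substitute whatever hostname -f prints on your machine), the two addresses from the question would become:

    <!-- core-site.xml -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://centos-box:8020</value>
    </property>

    <!-- mapred-site.xml -->
    <property>
        <name>mapred.job.tracker</name>
        <value>centos-box:8021</value>
    </property>

Also check /etc/hosts so that the hostname resolves to the machine's real IP address rather than a loopback entry.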

0 votes

You should specify your namenode address with the appropriate port in core-site.xml, for example:

    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>