33
votes

After installing Hadoop 2.2 and trying to launch the pipes example, I got the following error (the same error shows up when trying to launch hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out):

/usr/local/hadoop$ hadoop pipes -Dhadoop.pipes.java.recordreader=true -Dhadoop.pipes.java.recordwriter=true -input someFile.txt -output /out -program bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

My yarn-site.xml:

<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>

core-site.xml:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

mapred-site.xml:

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
</configuration>

hdfs-site.xml:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>

I've verified that IPv6 is disabled, as it should be. Maybe my /etc/hosts is not correct?

/etc/hosts:

fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1 localhost.localdomain localhost hduser
# Auto-generated hostname. Please do not remove this comment.
79.98.30.76 356114.s.dedikuoti.lt  356114
::1             localhost ip6-localhost ip6-loopback
What is your Java version? Please note that no Hadoop version released so far supports Java versions newer than 8. – schmi

10 Answers

27
votes

The problem connecting to the ResourceManager was because I needed to add a few properties to yarn-site.xml:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8031</value>
</property>

My jobs still aren't running, but the connection is successful now.

16
votes

Make sure you've started Yarn. Use this command to start it:

start-yarn.sh

Then use this command to verify that the Resource Manager is running:

jps

The output should look something like this:

17542 NameNode

17920 SecondaryNameNode

22064 Jps

17703 DataNode

18226 ResourceManager

18363 NodeManager

3
votes

The proper way might be to add the following lines to yarn-site.xml:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>127.0.0.1</value>
</property>

This works because the value is a single hostname that can be set in place of setting all the yarn.resourcemanager*.address properties, resulting in the default ports for the ResourceManager components.

Apache Hadoop 2.7.1 - Configurations for ResourceManager

  • Parameter: yarn.resourcemanager.hostname
  • Value: ResourceManager host.
  • Notes: A single hostname that can be set in place of setting all yarn.resourcemanager*address resources. Results in default ports for ResourceManager components.
1
votes

I faced the same problem and solved it.

Since the problem is in connecting to the ResourceManager, make sure YARN is running. YARN is split into different entities; one of them is the ResourceManager, which is responsible for allocating resources to the various applications running in the cluster.

Follow these steps:

  1. Start Yarn by using command: start-yarn.sh
  2. Check that the ResourceManager is running by using the command: jps
  3. Add the following to yarn-site.xml
<property>
    <name>yarn.resourcemanager.address</name>
    <value>127.0.0.1:8032</value>
</property>
1
votes

I resolved the same problem by setting the ResourceManager addresses to 127.0.0.1:<port> in yarn-site.xml:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8031</value>
</property>
0
votes
You can also set yarn.resourcemanager.address programmatically on the Configuration object:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
conf.set("yarn.resourcemanager.address", "127.0.0.1:8032");

0
votes

This issue might be due to a missing HADOOP_CONF_DIR, which the MapReduce application needs in order to find the yarn-site.xml that points to the ResourceManager. So, before running the MapReduce job, set/export HADOOP_CONF_DIR manually to the appropriate Hadoop conf directory, e.g. export HADOOP_CONF_DIR=/etc/hadoop/conf. This worked for me :)
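As a minimal sketch, assuming a Unix-like shell and that /etc/hadoop/conf is where your *-site.xml files live (adjust the path for your install):

```shell
# Point Hadoop at the directory holding core-site.xml, yarn-site.xml, etc.
# /etc/hadoop/conf is an assumed example path; substitute your own.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "Using config dir: $HADOOP_CONF_DIR"
# Then run the job as usual, e.g.:
#   hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out
```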

0
votes

In my case, I had a typo in my XML config file. You can check the logs at $HADOOP_HOME/logs/yarn-<your username>-resourcemanager-<your hostname>.log; there may be a helpful stack trace.

0
votes

This error occurs because the ResourceManager has failed to start. If you have changed the configuration files as the other answers suggest and are still getting the error, read on.

Note:- Windows 10, Hadoop 3.1.3 Verified

If you are a Windows user, go to hadoop-3.1.3/sbin/ and execute stop-all.cmd, then start-all.cmd.

Several terminals will now open: the NodeManager, DataNode, NameNode, and ResourceManager. Look for the error message in the ResourceManager terminal; that error is the real cause of your problem.

If the error message is something like this:
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager

Copy the file hadoop-yarn-server-timelineservice-3.1.3.jar
from ~/hadoop-3.1.3/share/hadoop/yarn/timelineservice
to ~/hadoop-3.1.3/share/hadoop/yarn

This should solve your issue.
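For reference, the copy can be scripted; this is a sketch assuming a Unix-like shell and a hypothetical install location (on Windows, the answer's target platform, use copy in cmd with the equivalent %HADOOP_HOME% paths):

```shell
# Assumed install location; adjust HADOOP_HOME to your setup.
HADOOP_HOME="${HADOOP_HOME:-$HOME/hadoop-3.1.3}"
SRC="$HADOOP_HOME/share/hadoop/yarn/timelineservice/hadoop-yarn-server-timelineservice-3.1.3.jar"
DST="$HADOOP_HOME/share/hadoop/yarn/"
# Copy the timeline-service jar up one level so the ResourceManager can load it.
if [ -f "$SRC" ]; then
  cp "$SRC" "$DST"
else
  echo "jar not found at $SRC (check your HADOOP_HOME)"
fi
```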

-4
votes

Use the settings below in /etc/hosts, adding your hostname in place of your_host_name:

127.0.0.1   localhost
127.0.0.1   your_host_name