2 votes

I am learning Hadoop by following the Michael Noll tutorials. When I try to run the wordcount example with hadoop jar hadoop-examples-1.2.1.jar wordcount tmp/Files tmp/Output, I am getting the following error:

13/11/10 18:09:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54311. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

...

13/11/10 18:09:51 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Call to localhost/127.0.0.1:54311 failed on connection exception: java.net.ConnectException: Connection refused
java.net.ConnectException: Call to localhost/127.0.0.1:54311 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
    at org.apache.hadoop.ipc.Client.call(Client.java:1118)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
    at org.apache.hadoop.mapred.JobClient.createProxy(JobClient.java:559)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:498)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
    at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
    at org.apache.hadoop.ipc.Client.call(Client.java:1093)
    ... 33 more

Addendum

I re-ran the commands in this order: bin/stop-all.sh, then bin/start-all.sh, then hadoop jar hadoop-examples-1.2.1.jar wordcount tmp/Files tmp/Output. But now I am getting the following error:

13/11/10 20:52:12 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode
    at org.apache.hadoop.mapred.JobTracker.checkSafeMode(JobTracker.java:5188)
    at org.apache.hadoop.mapred.JobTracker.getStagingAreaDir(JobTracker.java:3677)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode
    at org.apache.hadoop.mapred.JobTracker.checkSafeMode(JobTracker.java:5188)
    at org.apache.hadoop.mapred.JobTracker.getStagingAreaDir(JobTracker.java:3677)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Please help.

You have to start the Hadoop cluster before running the MapReduce program. – Ramesh Kotha
I had executed this command before running it: bin/start-all.sh – nbbk
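
A quick way to confirm that the cluster actually came up after bin/start-all.sh is to run jps as the user that started the daemons (hduser here). On a healthy single-node setup all five Hadoop daemons should be listed; the process IDs below are only placeholders:

jps

1788 NameNode
1938 DataNode
2085 SecondaryNameNode
2149 JobTracker
2287 TaskTracker
2349 Jps

If JobTracker is missing from this list, the "Connection refused" error above is expected, and the JobTracker log (by default in the logs directory of your Hadoop installation) should explain why it did not start.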

2 Answers

1 vote

Try to manually turn off safe mode (note that dfsadmin's -safemode option only accepts enter, leave, get, and wait, so use leave):

hadoop dfsadmin -safemode leave

Then rerun your job.
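
To confirm the NameNode has actually left safe mode before you resubmit, you can check its status first; on Hadoop 1.x this prints a line like "Safe mode is OFF":

hadoop dfsadmin -safemode get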

0 votes

This is because your JobTracker is in safe mode, not the NameNode. Use the following command to take the JobTracker out of safe mode:

bin/hadoop mradmin -safemode leave

You can use the commands below at any time to check whether your NameNode and JobTracker are out of safe mode:

bin/hadoop mradmin -safemode get

bin/hadoop dfsadmin -safemode get

Also, please make sure that you are using the correct user to start the daemons.
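
Putting the two answers together, a minimal recovery sequence for this situation could look like the following. It assumes you run it from your Hadoop installation directory as hduser; the input and output paths are the ones from the question, and the sleep is only there to give the daemons a moment to initialize:

bin/stop-all.sh
bin/start-all.sh
sleep 30
bin/hadoop dfsadmin -safemode get
bin/hadoop mradmin -safemode get
bin/hadoop mradmin -safemode leave
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount tmp/Files tmp/Output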

[screenshot: bin/hadoop mradmin usage output listing its supported options]

@Praveen Sripati: Please see the second-to-last option.