3
votes

Trying to run some Hadoop program. I can see the NameNode, DataNode, and YARN cluster URLs up and running, i.e. 127.0.0.1:50070/dfshealth.jsp, localhost:8088/cluster/cluster, etc.

But when I try to run my MapReduce program as: $ hadoop MySampleProgram hdfs://localhost/user/cyg_server/input/myfile.txt hdfs://localhost/user/cyg_server/output/op

the program fails with the following logs:

INFO mapreduce.Job (Job.java:monitorAndPrintJob(1295)) - map 0% reduce 0%

INFO mapreduce.Job (Job.java:monitorAndPrintJob(1308)) - Job job_1354496967950_0003 failed with state FAILED due to: Application application_1354496967950_0003 failed 1 times due to AM Container for appattempt_1354496967950_0003_000001 exited with exitCode: 127 due to: .Failing this attempt.. Failing the application.

2012-12-03 07:29:50,544 INFO mapreduce.Job (Job.java:monitorAndPrintJob(1313)) - Counters: 0

When I dug through some of the logs I noticed this: nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(193)) - Exit code from task is : 127

I am running on Windows 7, with Cygwin.

Any input is greatly appreciated.

:::ADDING MORE INFO HERE::: As of now I can see that the following Hadoop source fails during execution [while trying to launch the container]. I am adding the source URL for that file here (note: this is not a Hadoop bug, but it points at something I am missing). Class: DefaultContainerExecutor, Method: launchContainer, Lines: from the start of launchContainer to line 195, where it prints the exit code.

http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-yarn-server-nodemanager/0.23.1/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java#193
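To illustrate where the 127 comes from (this is my own minimal sketch, not the actual Hadoop source): launchContainer essentially runs the generated default_container_executor.sh through bash and reports the script's exit status, roughly like this:

    import java.io.IOException;

    // Minimal sketch, not the Hadoop source: run a launch script through
    // bash and report its exit status. Bash itself returns 127 when the
    // script, or a command inside it, cannot be found.
    public class LaunchContainerSketch {
        public static void main(String[] args) throws IOException, InterruptedException {
            // args[0]: path to the container launch script
            Process p = new ProcessBuilder("bash", args[0]).inheritIO().start();
            System.out.println("Exit code from task is : " + p.waitFor());
        }
    }

So a 127 here means bash could not find something, not that Hadoop itself threw an error.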

NODE MANAGER LOG EXTRACT

INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(175)) - launchContainer: [bash, /tmp/nm-local-...2936_0003/container_1354566282936_0003_01_000001/default_container_executor.sh]

WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(193)) - Exit code from task is : 127

INFO nodemanager.ContainerExecutor (ContainerExecutor.java:logOutput(167)) -

WARN launcher.ContainerLaunch (ContainerLaunch.java:call(274)) - Container exited with a non-zero exit code 127

Thanks Hari

2
Simply posting your older question as a new one does not mean that it will fit better. Please look into what causes an exit code of 127 on your platform and then come back with a SPECIFIC question. – Thomas Jungblut
I have reformatted it to be more readable. I am not sure what's causing the exit code 127; that's actually why I am posting this question here. I will surely add more info if I can. – hbr
The thing is that exit code 127 can be caused by almost anything. So you need to provide either some log data or details of your PC. – Thomas Jungblut
Bash's exit status 127 means the command was not found. Try running bash /tmp/nm-local-*2936_0003/container_1354566282936_0003_01_000001/default_container_executor.sh manually. – pensz
@pensz: thanks, that's what I am trying to get at. But this program clears the whole tmp directory on exit, including that .sh file. So I am trying to see if I can do something to make that .sh script persist. – hbr

2 Answers

3
votes

Hard-coding the Java home path inside hadoop-env.sh solved the issue for me. Exit code 127 means "command not found", and the container launch script invokes $JAVA_HOME/bin/java, so an unset or wrong JAVA_HOME makes the launch fail in exactly this way:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
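If you are not sure what to put there (the path above is just my JDK location on a Mac; substitute your own), a quick way to see the home of the JVM you actually run is to print the java.home system property. This is just a helper sketch:

    // Helper sketch: prints the home directory of the JVM running it.
    // On a JDK 8 install this typically points at the jre/ subdirectory;
    // for hadoop-env.sh any directory containing bin/java will do.
    public class JavaHomeCheck {
        public static void main(String[] args) {
            System.out.println(System.getProperty("java.home"));
        }
    }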
1
votes

I ran into this issue when I tried to use libraries that are not included in the standard Hadoop distribution (org.apache.lucene in my case). The solution was to add the missing libraries to the YARN classpath using the "yarn.application.classpath" configuration property:

    // Requires: import org.apache.hadoop.conf.Configuration;
    // conf is the job's Configuration (e.g. job.getConfiguration()).
    // Entries in yarn.application.classpath are comma-separated.
    String cp = conf.get("yarn.application.classpath");
    String home = System.getenv("HOME");
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-core/4.4.0/*";
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-analyzers/4.4.0/*";
    cp += "," + home + "/.m2/repository/org/apache/lucene/lucene-analyzers-common/4.4.0/*";
    conf.set("yarn.application.classpath", cp);