
I'm stuck on this problem. I'm using Hadoop (CDH3u3) and have tried every possible solution I found by Googling.

This is the issue:

When I ran the Hadoop "wordcount" example, the TaskTracker log on one slave node showed the following errors:

1. WARN org.apache.hadoop.mapred.DefaultTaskController: Task wrapper stderr: bash: /var/tmp/mapred/local/ttprivate/taskTracker/hdfs/jobcache/job_201203131751_0003/attempt_201203131751_0003_m_000006_0/taskjvm.sh: Permission denied

2. WARN org.apache.hadoop.mapred.TaskRunner: attempt_201203131751_0003_m_000006_0 : Child Error java.io.IOException: Task process exit with nonzero status of 126.

3. WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stdout log for task: attempt_201203131751_0003_m_000003_0 java.io.FileNotFoundException: /usr/lib/hadoop-0.20/logs/userlogs/job_201203131751_0003/attempt_201203131751_0003_m_000003_0/log.index (No such file or directory)
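For what it's worth, the first two warnings appear linked: bash exits with status 126 when it finds a script but cannot execute it, so the "Child Error ... nonzero status of 126" looks like a consequence of the "Permission denied" on taskjvm.sh, and the missing log.index in the third warning would then follow because the task JVM never started. A quick check on the failing slave (the jobcache path is taken from the log and only exists while the attempt's files are retained; the noexec idea is an assumption worth ruling out):

    # Inspect the failing script's owner and mode while the attempt files exist:
    ls -l /var/tmp/mapred/local/ttprivate/taskTracker/hdfs/jobcache/job_201203131751_0003/attempt_201203131751_0003_m_000006_0/taskjvm.sh
    # Rule out a noexec mount on the filesystem holding mapred.local.dir,
    # which also produces "Permission denied" when bash runs a script there:
    mount | grep noexec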

I could not find similar issues on Google. The closest posts I found suggest checking the following (my findings are noted for each; a diagnostic sketch follows the list):

  1. The ulimit of the Hadoop user: my ulimit is set high enough for this bundled example.
  2. The memory used by the JVM: my JVM uses only -Xmx200m, too small to exceed my machine's limit.
  3. The permissions on mapred.local.dir and the logs directory: I set them with "chmod 777".
  4. A full disk: there is enough space for Hadoop in both my log directory and mapred.local.dir.
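Here is a minimal sketch of how I re-verified each of the four points on the slave node. The paths assume the default CDH layout (mapred.local.dir under /var/tmp/mapred/local, logs under /usr/lib/hadoop-0.20/logs, config under /etc/hadoop/conf, as in the messages above); substitute yours if they differ:

    # 1. Limits for the user the TaskTracker runs as
    ulimit -n        # max open files
    ulimit -u        # max user processes

    # 2. Confirm the configured child heap (mapred.child.java.opts)
    grep -A1 mapred.child.java.opts /etc/hadoop/conf/mapred-site.xml

    # 3. Permissions along mapred.local.dir and the log directory
    ls -ld /var/tmp/mapred/local /var/tmp/mapred/local/ttprivate
    ls -ld /usr/lib/hadoop-0.20/logs/userlogs

    # 4. Free space AND free inodes (an exhausted inode table also makes
    #    file creation fail even when "df -h" still shows free space)
    df -h /var/tmp /usr/lib/hadoop-0.20/logs
    df -i /var/tmp /usr/lib/hadoop-0.20/logs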

How can I solve this problem?


1 Answer


For me, this happened because Hadoop wasn't able to create the MapReduce job logs under hadoop/logs/userlogs/JobID/attemptID.

ulimit is, of course, one of the most likely causes,

but for me it was that the disk we were using had somehow filled up, so creating the log files failed. A quick way to confirm is sketched below.
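A minimal sketch of that check, assuming the default CDH log location and that the daemon runs as the mapred user (both are assumptions; substitute your hadoop.log.dir and daemon user):

    # Free space and free inodes on the log filesystem
    df -h /usr/lib/hadoop-0.20/logs
    df -i /usr/lib/hadoop-0.20/logs
    # Verify the daemon user can actually create a file under userlogs
    sudo -u mapred touch /usr/lib/hadoop-0.20/logs/userlogs/.write_test && echo OK
    sudo -u mapred rm -f /usr/lib/hadoop-0.20/logs/userlogs/.write_test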