
I have set up a two-node Hadoop cluster. I started the Hadoop file system and MapReduce daemons without error and verified they are running on the master and slave. I am able to read the input files from the master and slave nodes using the command bin/hadoop dfs -getmerge hdfs://my.domain.com:54310/user/wordcount/sunzi.txt /tmp/wordcount. When I run the MapReduce job I see errors in the output. The job eventually completes, but the reduce phase takes a very long time and keeps falling back to the map tasks every time the error is printed. My site configuration files reference the DNS name of the master, so I don't know why the job is trying to read the task output from 'localhost'.
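
For context, the relevant entries in my site configuration files point at the master roughly like this (the HDFS URI matches the command above; the JobTracker address and port are only an approximation of my actual files):

core-site.xml:
  <property>
    <name>fs.default.name</name>
    <value>hdfs://my.domain.com:54310</value>
  </property>

mapred-site.xml:
  <property>
    <name>mapred.job.tracker</name>
    <value>my.domain.com:54311</value>
  </property>

Here is the output from running the job: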

12/12/20 10:47:36 INFO input.FileInputFormat: Total input paths to process : 7
12/12/20 10:47:36 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/12/20 10:47:36 WARN snappy.LoadSnappy: Snappy native library not loaded
12/12/20 10:47:36 INFO mapred.JobClient: Running job: job_201212201046_0001
12/12/20 10:47:37 INFO mapred.JobClient:  map 0% reduce 0%
12/12/20 10:47:44 INFO mapred.JobClient:  map 42% reduce 0%
12/12/20 10:47:45 INFO mapred.JobClient:  map 57% reduce 0%
12/12/20 10:47:49 INFO mapred.JobClient:  map 71% reduce 0%
12/12/20 10:47:50 INFO mapred.JobClient:  map 100% reduce 0%
12/12/20 10:47:54 INFO mapred.JobClient:  map 100% reduce 2%
12/12/20 10:48:08 INFO mapred.JobClient: Task Id : attempt_201212201046_0001_m_000002_0, Status : FAILED
Too many fetch-failures
12/12/20 10:48:08 WARN mapred.JobClient: Error reading task outputhttp://localhost:50060/tasklog?plaintext=true&attemptid=attempt_201212201046_0001_m_000002_0&filter=stdout
12/12/20 10:48:08 WARN mapred.JobClient: Error reading task outputhttp://localhost:50060/tasklog?plaintext=true&attemptid=attempt_201212201046_0001_m_000002_0&filter=stderr
12/12/20 10:48:08 INFO mapred.JobClient: Task Id : attempt_201212201046_0001_m_000001_0, Status : FAILED
Too many fetch-failures
12/12/20 10:48:09 INFO mapred.JobClient:  map 85% reduce 2%
12/12/20 10:48:10 INFO mapred.JobClient:  map 71% reduce 2%
12/12/20 10:48:11 INFO mapred.JobClient:  map 85% reduce 2%
12/12/20 10:48:12 INFO mapred.JobClient:  map 100% reduce 2%
12/12/20 10:48:33 INFO mapred.JobClient:  map 100% reduce 3%
12/12/20 10:48:34 INFO mapred.JobClient:  map 100% reduce 4%
12/12/20 10:48:47 INFO mapred.JobClient: Task Id : attempt_201212201046_0001_m_000003_0, Status : FAILED
Too many fetch-failures

I see this in the task tracker log:

2012-12-20 10:51:22,255 WARN org.apache.hadoop.mapred.TaskTracker: Unknown child with bad map output: attempt_201212201046_0001_m_000005_2. Ignored.
2012-12-20 10:51:22,256 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 127.0.0.1:50060, dest: 127.0.0.1:49774, bytes: 0, op: MAPRED_SHUFFLE, cliID: attempt_201212201046_0001_m_000005_2, duration: 1870835
2012-12-20 10:51:22,257 WARN org.mortbay.log: /mapOutput: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/apollo/jobcache/job_201212201046_0001/attempt_201212201046_0001_m_000005_2/output/file.out.index in any of the configured local directories
2012-12-20 10:51:23,225 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000002_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:26,239 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000002_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:26,372 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000000_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:32,255 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000002_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:32,387 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000000_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:35,401 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201212201046_0001_r_000000_0 0.19047621% reduce > copy (4 of 7 at 0.00 MB/s) > 
2012-12-20 10:51:37,116 WARN org.apache.hadoop.mapred.TaskTracker: getMapOutput(attempt_201212201046_0001_m_000005_2,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/apollo/jobcache/job_201212201046_0001/attempt_201212201046_0001_m_000005_2/output/file.out.index in any of the configured local directories
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
        at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:4009)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
        at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:848)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

1 Answer


For future googlers, I was able to fix this problem by removing the following entry from my /etc/hosts file:

127.0.0.1 localhost

I still do not see why Hadoop was trying to use localhost in the first place, though.
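
For anyone comparing against their own setup, the /etc/hosts I ended up with looks roughly like this (the IP addresses and the slave's hostname are placeholders, not my real values):

# /etc/hosts on both nodes -- no 127.0.0.1 localhost entry
192.168.1.10    my.domain.com      master
192.168.1.11    slave.domain.com   slave

A quick sanity check after the change is to run getent hosts my.domain.com (or ping -c1 my.domain.com) on each node: it should return the node's real IP rather than 127.0.0.1.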