Hadoop version: Hadoop 2.5.0-cdh5.3.1
The mapper and reducer scripts are shell scripts.
Some parts of the printed log:
AttemptID:attempt_1437751786759_1557_m_007335_0 Timed out after 600 secs
2015-08-21 19:46:55,837 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 76% reduce 0%
2015-08-21 19:46:57,066 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 100% reduce 100%
2015-08-21 19:47:03,159 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 97% reduce 100%
2015-08-21 19:47:04,372 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1372)) - map 100% reduce 100%
2015-08-21 19:47:04,794 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Job job_1437751786759_1557 failed with state FAILED due to: Task failed task_1437751786759_1557_m_001557
Job failed as tasks failed. failedMaps:1 failedReduces:0
2015-08-21 19:47:04,922 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1390)) - Counters: 34
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=1415074916
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=501146186
HDFS: Number of bytes written=0
HDFS: Number of read operations=22986
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=1137
Killed map tasks=1483
Launched map tasks=10282
Other local map tasks=10438
Total time spent by all maps in occupied slots (ms)=10996762530
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=5498381265
Total vcore-seconds taken by all map tasks=5498381265
Total megabyte-seconds taken by all map tasks=5630342415360
Map-Reduce Framework
Map input records=7662
Map output records=189860
Map output bytes=8829322
Map output materialized bytes=101153057
Input split bytes=988398
Combine input records=0
Spilled Records=189860
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=450437
CPU time spent (ms)=129978840
Physical memory (bytes) snapshot=3951235211264
Virtual memory (bytes) snapshot=13755897688064
Total committed heap usage (bytes)=3860902445056
File Input Format Counters
Bytes Read=500157788
2015-08-21 19:47:04,922 ERROR [main] streaming.StreamJob (StreamJob.java:submitAndMonitorJob(1019)) - Job not successful!
Streaming Command Failed!
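For reference, the "Timed out after 600 secs" message matches the default of the standard Hadoop 2.x property mapreduce.task.timeout (600000 ms). If a map step legitimately needs longer, the job could be submitted with that value raised. This is only a sketch: the jar path and the input/output/mapper/reducer arguments below are placeholders, not my actual command.

```shell
# Hypothetical streaming submission; the -D mapreduce.task.timeout line is
# the only real point here. The property is in milliseconds
# (default 600000 = 600 s); 1800000 raises it to 30 minutes.
hadoop jar /path/to/hadoop-streaming.jar \
    -D mapreduce.task.timeout=1800000 \
    -input  /user/rp-product/dma/input \
    -output /user/rp-product/dma/output \
    -mapper mapper.sh \
    -reducer reducer.sh \
    -file mapper.sh -file reducer.sh
```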
Besides, in the tracking URL I got these logs:
++ date +%Y%m%d%H%M%S
+ /home/disk1/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop/bin/hadoop dfs -D speed.limit.kb=9000 -put ./sites_url hdfs://nameservice1/user/rp-product/dma/newsites/url/ccdb/20150821185246..sites_url
DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
put: No lease on /user/rp-product/dma/newsites/url/ccdb/20150821185246..sites_url.COPYING (inode 913353): File does not exist. Holder DFSClient_NONMAPREDUCE_39002115_1 does not have any open files.
++ cat sele_url
++ wc -l
+ cn=32
+ (( 32>0 ))
+ cat sele_url
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.impl.MetricsSystemImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
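As the trace shows, the mapper runs a hadoop dfs -put of sites_url inside the task, and a copy like that can take longer than 600 s without the task reporting any progress. One pattern that might keep such a task alive is a heartbeat: Hadoop Streaming interprets stderr lines of the form "reporter:status:<message>" as status updates, and each update resets the task timeout. A minimal sketch, with the actual copy step replaced by a placeholder:

```shell
#!/bin/sh
# Sketch of a heartbeat around a long step inside a streaming shell mapper.
# Hadoop Streaming treats stderr lines beginning with "reporter:status:"
# as status updates that reset the task timeout. Everything below is
# illustrative, not my actual mapper.

heartbeat() {
  while :; do
    echo "reporter:status:copy in progress" >&2
    sleep 60   # report once a minute, well under the 600 s timeout
  done
}

heartbeat &          # run the heartbeat in the background
HB_PID=$!

# Placeholder for the real long-running step, e.g.:
#   hadoop dfs -put ./sites_url "$DEST"
sleep 1

kill "$HB_PID" 2>/dev/null   # stop the heartbeat once the step finishes
```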
I also found the Hadoop Java source code here.
I searched Google for a solution without success, and the logs don't give me enough to guess at possible causes, so I'd appreciate any help or hints.
Thank you very much.
Best regards!