
I have this issue on an AWS EMR cluster (4 core nodes, m3.xlarge) while processing a 40 GB text file: FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space

It occurs during the map phase. The job starts, then fails after a few minutes. Versions: emr-4.4.0, Amazon Hadoop 2.7.1, Pig 0.14.0.

I've tried these commands with different values, but the issue still occurs:

  • pig -Dmapreduce.map.java.opts=-Xmx2304m -Dmapred.child.java.opts=-Xmx3072m script.pig
  • pig -Dmapreduce.map.java.opts=-Xmx3328m -Dmapred.child.java.opts=-Xmx4096m -Dmapreduce.map.memory.mb=5120 script.pig
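For what it's worth, mapred.child.java.opts is the deprecated pre-YARN property name; when both are set, mapreduce.map.java.opts takes precedence for map tasks, so setting both with different values only adds confusion. The usual rule of thumb is to set the YARN container size (mapreduce.map.memory.mb) together with the heap, keeping -Xmx at roughly 80% of the container. A sketch of a consistent invocation (the values here are illustrative, not tuned for this job):

  pig -Dmapreduce.map.memory.mb=4096 \
      -Dmapreduce.map.java.opts=-Xmx3277m \
      script.pig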

I'm running out of ideas... any suggestions?

2016-03-26 08:05:06,087 INFO [main] amazon.emr.metrics.MetricsSaver: 1 aggregated HDFSReadBytes 63 raw values into 5 aggregated values, total 5
2016-03-26 08:05:17,518 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at org.apache.hadoop.io.Text.setCapacity(Text.java:266)
    at org.apache.hadoop.io.Text.append(Text.java:236)
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:243)
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
    at org.apache.pig.builtin.TextLoader.getNext(TextLoader.java:58)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:565)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:172)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)

2016-03-26 08:05:17,621 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2016-03-26 08:05:17,622 INFO [cloudwatch] org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: cloudwatch thread interrupted.
2016-03-26 08:05:17,625 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2016-03-26 08:05:17,625 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.

Could you post your script? - vlahmot

1 Answer


I've found out why I had this issue. My text file contained a few lines made up of long runs of the ^@ character (the NUL byte), which produced lines of enormous length; Hadoop's LineReader then kept growing its buffer trying to read one "line" until the heap ran out. Once those lines were removed, it works fine.

https://superuser.com/questions/75130/how-to-remove-this-symbol-with-vim
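As a command-line alternative to the vim approach linked above, the NUL bytes can be stripped with tr, which works byte by byte and so doesn't care how long the "lines" are. A small sketch, using a hypothetical sample.txt as a stand-in for the real input:

```shell
# Hypothetical sample file: one clean line, one line containing NUL bytes
# (the ^@ characters vim displays)
printf 'good line\nbad\000\000\000line\n' > sample.txt

# Strip every NUL byte and write a cleaned copy
tr -d '\000' < sample.txt > sample_clean.txt
```

For a 40 GB file this streams in one pass; replace sample.txt with the actual input path.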