
My Spark job fails with the following error:

Diagnostics: Container [pid=7277,containerID=container_1528934459854_1736_02_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 3.1 GB of 6.9 GB virtual memory used. Killing container.

1 Answer


Your container is being killed because it hit its physical memory limit (1.4 GB of 1.4 GB used). This happens when the memory YARN allocates to a container is less than the task actually needs, so the solution is to increase the memory available to YARN.

You have two choices:

  1. Increase the memory available to your existing NodeManager (see the yarn-site.xml sketch after this list), or
  2. Add a NodeManager on another DataNode.

Either option increases the total YARN memory; make sure it ends up at around 2 GB at the very least.
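
As a minimal sketch of option 1: the total memory a NodeManager can hand out, and the largest single container the scheduler will grant, are set in yarn-site.xml. The property names below are the standard YARN ones, but the 2048 MB values are illustrative and assume the node has that much RAM to spare; restart the NodeManager after changing them.

    <!-- yarn-site.xml (illustrative values; size to your node's actual RAM) -->
    <property>
      <!-- Total memory the NodeManager can allocate to containers on this node -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2048</value>
    </property>
    <property>
      <!-- Largest single container the scheduler will grant -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2048</value>
    </property>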
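
Separately from growing the YARN pool, container ..._000001 in your diagnostics is usually the ApplicationMaster, and the 1.4 GB limit likely corresponds to the default 1 GB driver heap plus the default 384 MB overhead in yarn-cluster mode. So it can also help to request a larger container at submit time. A hedged sketch; the 2g and 512 figures and your_job.py are placeholders to tune for your job (in Spark 2.3+ the overhead key is spark.driver.memoryOverhead):

    # Ask YARN for a larger ApplicationMaster/driver container.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --driver-memory 2g \
      --conf spark.yarn.driver.memoryOverhead=512 \
      your_job.py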