2 votes

-- Solved by changing the state backend from filesystem to RocksDB --

We run Flink 1.9 on AWS EMR. The Flink app reads from one Kinesis stream and writes to another. Recently the checkpoint size has grown to about 1 GB (due to increased data volume). Sometimes, while a checkpoint is being taken, the application saturates all CPU on the node (this happens several times a day).

Metrics (screenshots not included):

Load average (EMR EC2 core node with job/task managers)

Run Loop Time (Kinesis consumer)

Records Per Fetch (Kinesis consumer)

Task Manager GC

JobManager logs:

{"level":"INFO","timestamp":"2020-08-25 04:55:27,399","thread":"Checkpoint Timer","file":"CheckpointCoordinator.java","line":"617","message":"Triggering checkpoint 1232 @ 1598331327244 for job 0039825bafae26bc34db88e037a1dae3."}

{"level":"INFO","timestamp":"2020-08-25 04:58:24,509","thread":"flink-akka.actor.default-dispatcher-7010","file":"ResourceManager.java","line":"1144","message":"The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out."}

{"level":"INFO","timestamp":"2020-08-25 04:58:24,510","thread":"flink-akka.actor.default-dispatcher-7010","file":"ResourceManager.java","line":"805","message":"Closing TaskExecutor connection container_1597960565773_0003_01_000002 because: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002  timed out."}
{"level":"INFO","timestamp":"2020-08-25 04:58:24,514","thread":"flink-akka.actor.default-dispatcher-7015","file":"Execution.java","line":"1493","message":"Sink: kinesis-events-sink (1/1) (573401e241fe0a0ac0a8a54c81c4eefd) switched from RUNNING to FAILED."}
java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002  timed out.
        at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1146)
        at org.apache.flink.runtime.heartbeat.HeartbeatMonitorImpl.run(HeartbeatMonitorImpl.java:109)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
        at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
        at akka.actor.ActorCell.invoke(ActorCell.scala:561)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{"level":"INFO","timestamp":"2020-08-25 04:58:24,515","thread":"flink-akka.actor.default-dispatcher-7015","file":"ExecutionGraph.java","line":"1324","message":"Job JOB_NAME_HIDDEN (0039825bafae26bc34db88e037a1dae3) switched from state RUNNING to FAILING."}
java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out.
        ... (same stack trace as above)

TaskManager logs:

{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:02.469832] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:127] (sensing-events-test) Average Processing Time: -nan ms"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469922] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:109] Stage 1 Triggers: { stream: 'sensing-events-test', manual: 0, count: 0, size: 0, matches: 0, timed: 0, UserRecords: 0, KinesisRecords: 0 }"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469977] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:112] Stage 2 Triggers: { stream: 'sensing-events-test', manual: 0, count: 0, size: 0, matches: 0, timed: 0, KinesisRecords: 0, PutRecords: 0 }"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469992] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:127] (sensing-events-test) Average Processing Time: -nan ms"}
{"level":"ERROR","timestamp":"2020-08-25 04:58:28,535","thread":"flink-akka.actor.default-dispatcher-628","file":"FatalExitExceptionHandler.java","line":"40","message":"FATAL: Thread 'flink-akka.actor.default-dispatcher-628' produced an uncaught exception. Stopping the process..."}
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at sun.reflect.UTF8.encode(UTF8.java:36)
        at sun.reflect.ClassFileAssembler.emitConstantPoolUTF8(ClassFileAssembler.java:103)
        at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:331)
        at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:112)
        at sun.reflect.ReflectionFactory.generateConstructor(ReflectionFactory.java:398)
        at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:360)
        at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1588)
        at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:79)
        at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:519)
        at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:494)
        at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:391)
        at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:681)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1941)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1807)
        at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1595)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
        at org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.readObject(RemoteRpcInvocation.java:204)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2234)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2125)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
        at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)

Update: flink-conf.yaml

state.backend.fs.checkpointdir: s3a://s3-bucket-with-checkpoints/flink-checkpoints
taskmanager.numberOfTaskSlots: 1
state.backend: filesystem
taskmanager.heap.size: 3057m
state.checkpoints.dir: s3a://s3-bucket-with-checkpoints/external-checkpoints

Flink Checkpoints


2 Answers

1 vote

Solved the problem by changing the state backend from filesystem to RocksDB.
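For anyone landing here, the change is an edit to flink-conf.yaml. The sketch below is illustrative: the localdir path is an assumption, and incremental checkpoints are optional but useful once state is around 1 GB.

```yaml
state.backend: rocksdb
# Optional: upload only the diff since the last checkpoint instead of the full state
state.backend.incremental: true
# Illustrative local working directory for RocksDB (any fast local disk on the task manager)
state.backend.rocksdb.localdir: /mnt/flink/rocksdb
```

This helps because RocksDB keeps state serialized off-heap/on-disk rather than as live objects on the JVM heap, which removes the GC pressure that shows up in the logs above with the filesystem backend.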

0
votes

I think this might be related to the SlidingEventTimeWindow, which, as far as I understand from the checkpoint screenshot, has a size of 2 minutes and a slide of 2 seconds. Flink stores one copy of each element for every window it belongs to, so with this sliding window each element lands in about 60 windows, and the state is roughly 60x larger than it would be for a tumbling window of the same size.
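The 60x factor follows directly from the ratio of window size to slide. A minimal self-contained sketch of the arithmetic (the class and method names are just for illustration):

```java
public class SlidingWindowCost {
    // Each element belongs to ceil(windowSize / slide) sliding windows.
    static long windowsPerElement(long windowSizeMs, long slideMs) {
        return (windowSizeMs + slideMs - 1) / slideMs;
    }

    public static void main(String[] args) {
        long size = 2 * 60 * 1000; // 2-minute window
        long slide = 2 * 1000;     // 2-second slide
        // 120000 / 2000 = 60 copies of every element held in state
        System.out.println(windowsPerElement(size, slide));
    }
}
```

So even a modest input rate multiplies into a large keyed state, which matches the ~1 GB checkpoints reported in the question.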

My guess is that on checkpoint Flink tries to serialize this state, runs short of heap, the GC starts thrashing, and eventually the process dies with an OutOfMemoryError.