1
votes

We have created an application with:

  • 2 nodes + 1 Notary
  • 2 web applications (one for each node)

The Corda version is 3.2.

The CorDapp has:

  • a first flow that receives a list of objects as input and uses it to create a list of States; this list is the output of the transaction

  • the list of objects (approx. 3000) is split into sub-lists of 450 elements, because ActiveMQ Artemis rejects larger lists with [java.lang.IllegalArgumentException: Record is too large to store]

  • after the first flow, we launch another flow with similar logic. In this case, a list of StateAndRef (the result of a query via RPCops) is received as input by the flow and used as the output of the transaction

  • in this case too, we split the list (approx. 3000 objects) into sub-lists of 450 elements (a client-side sketch follows this list)
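
To make the setup concrete, here is a minimal client-side sketch of the chunking and flow submission described above, against the Corda 3.x RPC API. The flow classes (IssueStatesFlow, ConsumeStatesFlow) and the types MyObject/MyState are hypothetical placeholders, not the actual CorDapp code:

    // Client-side sketch (Corda 3.x RPC API). IssueStatesFlow, ConsumeStatesFlow,
    // MyObject and MyState are illustrative placeholders.
    import net.corda.core.contracts.StateAndRef
    import net.corda.core.messaging.CordaRPCOps
    import net.corda.core.messaging.startFlow
    import net.corda.core.messaging.vaultQueryBy
    import net.corda.core.node.services.vault.PageSpecification
    import net.corda.core.utilities.getOrThrow

    fun runWorkflow(rpc: CordaRPCOps, inputObjects: List<MyObject>) {
        // First flow: ~3000 objects, submitted 450 per transaction to stay
        // under the Artemis "Record is too large to store" limit.
        inputObjects.chunked(450).forEach { chunk ->
            rpc.startFlow(::IssueStatesFlow, chunk).returnValue.getOrThrow()
        }

        // Second flow: query the vault over RPC for the issued states.
        // Vault queries returning more than the default page size (200)
        // need an explicit PageSpecification.
        val issued: List<StateAndRef<MyState>> = rpc
            .vaultQueryBy<MyState>(paging = PageSpecification(pageNumber = 1, pageSize = 3500))
            .states

        // Consume the states in sub-lists of 450 as well.
        issued.chunked(450).forEach { chunk ->
            rpc.startFlow(::ConsumeStatesFlow, chunk).returnValue.getOrThrow()
        }
    }

Note that List.chunked requires Kotlin 1.2+; with an older Kotlin on the client side, a manual sub-listing loop does the same job.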

We randomly receive a Java heap space error and SslHandshakeCompletionEvent(javax.net.ssl.SSLException: handshake timed out). This looks like a serious memory leak.

We can only complete the entire workflow by using -Xmx10240m (10 GB) on our local machine. Monitoring the resources, the heap seems to grow exponentially, especially during the transactions.

What could be the reason for this crash?

Is it not possible to use Corda with lists of this size?

Do you get the heap space error when running the first flow or the second flow? – Joel
It depends on the size of the -Xmx parameter. Like I said, I call the first flow N times in sequence (each with 450 elements). It usually crashes after submitting two flows. – Antonio Grandinetti
And what exactly are you returning via RPC? Do you return the entire transaction/all the states as a list? If you stop doing that, does it help? – Joel
I added to the post the way I manage the result of each flow. – Antonio Grandinetti
Thanks for the help. The solution was to override toString and hashCode not only in the Entity class for the States, but also in the class containing these Entities (the one that extends MappedSchema). Now the nodes never exceed 1.2 GB of used memory. – Antonio Grandinetti

1 Answer

2
votes

The solution was to override toString and hashCode not only in the Entity class for the States, but also in the class containing these Entities (the one that extends MappedSchema). Now the nodes never exceed 1.2 GB of used memory.
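
For anyone hitting the same issue, a minimal sketch of what that fix can look like; the schema and entity names below are hypothetical placeholders (and in general equals would normally be overridden alongside hashCode, but only the two methods mentioned above are shown):

    // Illustrative sketch of the fix; MySchema/MySchemaV1/PersistentMyState
    // are placeholder names, not the actual classes.
    import net.corda.core.schemas.MappedSchema
    import net.corda.core.schemas.PersistentState
    import javax.persistence.Column
    import javax.persistence.Entity
    import javax.persistence.Table

    object MySchema

    object MySchemaV1 : MappedSchema(
        schemaFamily = MySchema.javaClass,
        version = 1,
        mappedTypes = listOf(PersistentMyState::class.java)
    ) {
        // Overridden on the schema class (the one extending MappedSchema)
        // as well as on the entity, per the fix described above.
        override fun toString() = "MySchemaV1"
        override fun hashCode() = javaClass.name.hashCode()

        @Entity
        @Table(name = "my_states")
        class PersistentMyState(
            @Column(name = "value_field")
            var value: String = ""
        ) : PersistentState() {
            override fun toString() = "PersistentMyState(value=$value)"
            override fun hashCode() = value.hashCode()
        }
    }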