
I have 9 Ignite server instances I0, I1, ..., I8 with a cache in PARTITIONED mode, into which I am loading data in parallel from Kafka partitions P0, P1, ..., P8. Each partition P0, P1, ..., P8 contains a number of entries that can be uniquely identified by the field seq_no, and I am using part_ID to collocate the entries from one Kafka partition onto one instance only. I have defined the key as,

 class Key 
 { 
      int seq_no; 
      @AffinityKeyMapped 
      int part_ID; // for collocating entries from one partition onto one instance only  
 } 

So, I am trying to achieve a one-to-one mapping between Kafka partitions and Ignite instances, e.g. I0->P0, I1->P1, ..., I8->P8. But the mapping I am actually getting is:

 I0 -> NULL (no entries), 
 I1 -> P5, 
 I2 -> NULL, 
 I3 -> P7, 
 I4 -> P2, P6, 
 I5 -> P1, 
 I6 -> P8, 
 I7 -> P0, P4, 
 I8 -> P3 

The affinity collocation part is achieved here, i.e. entries with the same partition ID get cached on the same Ignite instance. But the data is not equally distributed among the Ignite instances: I4 and I7 hold two partitions' data, whereas I0 and I2 do not contain any data. So how can we achieve an equal distribution of data, so that each Ignite instance gets exactly one partition's data?
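For context, the imbalance is expected behaviour rather than a bug: Ignite's default RendezvousAffinityFunction hashes the affinity key into one of (by default) 1024 cache partitions, and then assigns those cache partitions to nodes by rendezvous hashing over the node IDs. With only 9 distinct part_ID values there is no pigeonhole guarantee that the 9 chosen cache partitions land on 9 different nodes. A standalone simulation of this two-stage placement (the class, the node-ID generation, and the hash formula below are simplified assumptions for illustration, not Ignite's actual code):

```java
import java.util.Arrays;
import java.util.Random;

public class AffinitySketch {
    static final int CACHE_PARTITIONS = 1024; // Ignite's default partition count

    // Rendezvous-style assignment: a cache partition is owned by the node
    // whose (nodeId, partition) hash scores highest. Node IDs are effectively
    // random, so which node owns which partition is arbitrary.
    static int ownerOf(int partition, long[] nodeIds) {
        int best = 0;
        long bestScore = Long.MIN_VALUE;
        for (int n = 0; n < nodeIds.length; n++) {
            long score = Long.hashCode(nodeIds[n] * 31 + partition) * 0x9E3779B97F4A7C15L;
            if (score > bestScore) { bestScore = score; best = n; }
        }
        return best;
    }

    // Count how many of the 9 part_ID keys land on each of the 9 nodes.
    // An int's hashCode is the value itself, so part_ID i maps to cache
    // partition i; the owning node is then decided by the hash above.
    static int[] distribute(long[] nodeIds) {
        int[] keysPerNode = new int[nodeIds.length];
        for (int partId = 0; partId < 9; partId++) {
            int cachePartition = Math.floorMod(Integer.hashCode(partId), CACHE_PARTITIONS);
            keysPerNode[ownerOf(cachePartition, nodeIds)]++;
        }
        return keysPerNode;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // deterministic node IDs for the demo
        long[] nodeIds = new long[9];
        for (int n = 0; n < 9; n++) nodeIds[n] = rnd.nextLong();
        // The 9 keys rarely spread one-per-node; several counts are
        // typically 0 while others are 2 or more.
        System.out.println(Arrays.toString(distribute(nodeIds)));
    }
}
```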


2 Answers

  1. Could you try removing the affinity key and check whether the data is distributed equally among all the nodes?
  2. Check that all the Ignite servers are part of the same Ignite cluster and that all of them have the same heap space allocated. One reason this might happen is that server 0 and server 2 do not have enough heap space.

Also, if the answer to point 1 is yes, then I guess you would have to implement your own affinity function.
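In Ignite that would mean implementing org.apache.ignite.cache.affinity.AffinityFunction (roughly, its partition(Object key) and assignPartitions(...) methods) so that part_ID i always resolves to node i instead of being hashed. The core mapping logic can be sketched as plain Java without the Ignite interfaces, which need a running cluster (the class and method names here are illustrative, not Ignite API):

```java
public class DirectAffinity {
    // Deterministic one-to-one placement: part_ID i is always served by
    // node (i % nodeCount). With 9 part_IDs and 9 nodes, every node gets
    // exactly one Kafka partition's data.
    static int nodeFor(int partId, int nodeCount) {
        return Math.floorMod(partId, nodeCount);
    }

    public static void main(String[] args) {
        // Prints P0 -> I0, P1 -> I1, ..., P8 -> I8 (one line each)
        for (int partId = 0; partId < 9; partId++)
            System.out.println("P" + partId + " -> I" + nodeFor(partId, 9));
    }
}
```

Note that a fixed mapping like this trades away what the default affinity function gives you: if a node leaves the cluster, its partition is not automatically rebalanced to the remaining nodes, so you would need to handle topology changes yourself.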