2 votes

I have configured a flow as follows:

  1. GetFile
  2. SplitText -> splitting into flowfiles
  3. ExtractText -> adding attributes with two keys
  4. PutDistributedMapCache -> Cache Entry Identifier is ${Key1}_${Key2} (see the property sketch below)
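
Roughly, the caching side is configured like this (the regexes and the controller-service name below are just placeholder sketches, not my exact values):

    ExtractText (one dynamic property per key; capture group 1 becomes the attribute):
      Key1 : ^([^,]*),                 # first CSV column  -> attribute Key1
      Key2 : ^(?:[^,]*,){1}([^,]*)     # second CSV column -> attribute Key2

    PutDistributedMapCache:
      Cache Entry Identifier    : ${Key1}_${Key2}
      Distributed Cache Service : DistributedMapCacheClientService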

In a second flow I configured a GenerateFlowFile that generates a sample record and feeds it into LookupRecord (key: concat(/Key1,'_',/Key2)), which looks up the same key in the cache.
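
For illustration, the generated record is shaped roughly like this (the values and the empty Feedback field are placeholders; in my real flow there are four keys, as noted below):

    {
      "Key1"     : "9",
      "Key2"     : "9",
      "Feedback" : null
    }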

I suspect the problem is in my caching flow, because when I cache the same records with a GenerateFlowFile instead of GetFile, the lookup works; with the flow above, the lookup fails. Please help.

[screenshot: overall flow]

[screenshot: PutDistributedMapCache configuration]

[screenshot: ExtractText configuration]

[screenshot: lookup flow]

[screenshot: LookupRecord configuration]

I have added four keys in total because that is my business use case.

I have a CSV file with 53 records. I use SplitText to split it into individual records and ExtractText to add the attributes that act as my key, which I then store with PutDistributedMapCache. In the other flow I start with a GenerateFlowFile which generates a record like this:

[screenshot: generated sample record]

So I expect my LookupRecord, which has a JSON reader and a JSON writer, to read this record, look up the key in the distributed cache, and populate the /Feedback field in the record.
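
Roughly, the lookup side is set up like this (the reader/writer and service names are approximate; the important parts are the Result RecordPath and the dynamic property, which has to be named key because that is the coordinate name DistributedMapCacheLookupService expects):

    LookupRecord:
      Record Reader           : JsonTreeReader
      Record Writer           : JsonRecordSetWriter
      Lookup Service          : DistributedMapCacheLookupService
      Result RecordPath       : /Feedback
      Routing Strategy        : Route to 'matched' or 'unmatched'
      key  (dynamic property) : concat(/Key1,'_',/Key2,'_',/Key3,'_',/Key4)

    DistributedMapCacheLookupService:
      Distributed Cache Service : the same DistributedMapCacheClientService that PutDistributedMapCache uses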

The lookup fails and the records are routed to unmatched.

Now the catch: if I remove GetFile and instead cache using a GenerateFlowFile with this config:

[screenshot: GenerateFlowFile configuration]

then my lookup works for the key 9_9_9_9. But the moment I cache another set of records with different keys, the lookup fails.

What is this: concat(/Key1,'_',/Key2)? Could you edit your question and provide all parameters of the LookupRecord and PutDistributedMapCache processors? - daggett
I have added the configs - Aviral Kumar
@daggett Can you suggest something for this problem? - Aviral Kumar
Now describe your problem and provide an example of the JSON plus its Avro schema. Why do you have to use LookupRecord instead of PutDistributedMapCache? The one point I can see: according to the documentation, your record path must contain the key 'key', so it should look like /key[concat(...)]/..., but to give a full answer an example of the JSON and its format is needed. - daggett
I have added the details. - Aviral Kumar

1 Answer

2 votes

I figured it out: my DistributedMapCacheServer had a default config of Max Cache Entries = 1. I increased it and it's working now :)
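
For anyone hitting the same thing, the setting lives on the DistributedMapCacheServer controller service. Roughly (the port and numbers below are illustrative):

    DistributedMapCacheServer:
      Port                  : 4557
      Maximum Cache Entries : 10000     # was 1 before the fix
      Eviction Strategy     : Least Frequently Used

With only one entry allowed, every PutDistributedMapCache evicted the previously cached key, which is why only the most recently cached key (e.g. 9_9_9_9) could ever be looked up.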