
I was asked to create a control table with Informatica. I am a newbie and do not have much knowledge about it. I saw the same kind of setup in my previous project, but I don't know how to create a mapplet for it. The requirement is that I have to create a mapplet that populates the following columns:

- mapping_name

- session_name

- last_run_date

- source_count

- target_count

- status
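
For reference, I imagine the control table itself would look roughly like this (a rough sketch; the table name and column types are my guesses, Oracle-style):

    -- Rough sketch of the control table (names and types assumed).
    CREATE TABLE etl_control (
        mapping_name   VARCHAR2(100),
        session_name   VARCHAR2(100),
        last_run_date  DATE,
        source_count   NUMBER,
        target_count   NUMBER,
        status         VARCHAR2(20)
    );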

Here is an example of what happens:

We executed a workflow with a particular mapping last week.

Now, after a week, we are executing the same mapping again.

The requirement is that we should fetch only those records that fall within this particular time frame (i.e., from the previous run to the current run). This is the part I do not know how to do.

Can you please help me out? I can provide further details if required.


1 Answer


There is a solution provided in the link below, but it does not use a mapplet. Note that if you use a mapplet, you will not get the 'status' attribute, and the mapplet approach can be difficult to implement across all mappings. You can use this link to gather statistics as well: http://powercenternotes.blogspot.com/2014/01/an-etl-framework-for-operational.html

Now, regarding your other requirement: this looks like a standard incremental extract problem. You need to store the date when your flow last ran - in a DB table or a flat file. Use that as a reference and pull only the records newer than that date.
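
For example, here is a minimal sketch of the idea in SQL, assuming the etl_control table sketched in the question and a hypothetical source table src_orders with a last_updated_date column. In PowerCenter this filter would typically live in the Source Qualifier's SQL override, or be driven by a mapping variable (e.g. a hypothetical $$LAST_RUN_DATE):

    -- Pull only the rows changed since the previous successful run.
    SELECT *
      FROM src_orders
     WHERE last_updated_date >
           (SELECT MAX(last_run_date)
              FROM etl_control
             WHERE mapping_name = 'm_load_orders'
               AND status = 'SUCCESS');

    -- After a successful run, insert a new control row as the high-water mark.
    INSERT INTO etl_control (mapping_name, session_name, last_run_date, status)
    VALUES ('m_load_orders', 's_m_load_orders', SYSDATE, 'SUCCESS');

Here 'm_load_orders' and 's_m_load_orders' are placeholder mapping/session names; the INSERT could run from a post-session task or a small follow-up mapping.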

Mapplet - we used this approach earlier to gather statistics, but it is cumbersome because you need to add this mapplet plus a reusable generic target to every mapping to capture the stats.

  1. Input transformation -
     Type_of_data - (can be 'source' or 'target')
     unique_key - (unique key of the mapping)
     MappingName - $PMMappingName (built-in variable)
     SessionName - $PMSessionName (built-in variable)

  2. Aggregator transformation -
     Input ports:
     Type_of_data
     unique_key
     MappingName (group by)
     SessionName (group by)
     Output port:
     count_row = COUNT(unique_key)

  3. Output transformation -
     Type_of_data
     MappingName
     SessionName
     count_row

Use a reusable generic target to capture all the rows; a sketch of such a target follows below. You need to add one instance of the mapplet after each source and one before each target. Overall, though, I think the approach in the link is better.
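
For illustration, the reusable generic target could be a table shaped like the mapplet's output ports, something like this (a rough sketch; the table name and types are assumptions):

    -- Hypothetical generic stats table matching the mapplet output ports.
    CREATE TABLE etl_run_stats (
        mapping_name  VARCHAR2(100),
        session_name  VARCHAR2(100),
        type_of_data  VARCHAR2(10),          -- 'source' or 'target'
        count_row     NUMBER,
        load_date     DATE DEFAULT SYSDATE   -- optional audit column, not a mapplet port
    );

Every mapping then writes its source-side and target-side counts into this one table, which is what makes the target reusable.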