0
votes

I have a question about inserting records into a database using the File endpoint. I want to insert JSON records into the DB. I created a JSON file and inserted all of its data into the database. My problem is that although the data is inserted successfully, the flow keeps re-inserting it continuously, and I get the error Duplicate entry '1' for key 'PRIMARY'.
How can I solve this error? I don't want the data to be inserted repeatedly. How can I insert it only once? I used the following flow:

**File->Json to Object->Splitter->Database**
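For reference, a minimal sketch of that flow in Mule 3.x XML (the connector names, file path, table, and query are placeholders, not the actual config) might look like:

```xml
<flow name="fileToDbFlow">
    <!-- Poll a directory for JSON files (path is a placeholder);
         moving processed files aside helps avoid re-reading them -->
    <file:inbound-endpoint path="/tmp/input" moveToDirectory="/tmp/processed"/>
    <!-- Deserialize the JSON payload into a collection of objects -->
    <json:json-to-object-transformer returnClass="java.util.List"/>
    <!-- Split the collection so each record is inserted individually -->
    <collection-splitter/>
    <!-- Insert each record (config-ref and query are placeholders) -->
    <db:insert config-ref="MySQL_Configuration">
        <db:parameterized-query><![CDATA[
            INSERT INTO entries (id, name) VALUES (#[payload.id], #[payload.name])
        ]]></db:parameterized-query>
    </db:insert>
</flow>
```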

Please help me.

3

3 Answers

1
votes

You can use an Idempotent Message Filter (after the Splitter) to ensure that duplicate entries are discarded. If your JSON representation has a unique identifier, use the Idempotent Message Filter:

<idempotent-message-filter idExpression="#[entry.id]">
    <simple-text-file-store directory="./idempotent"/>
</idempotent-message-filter>

Otherwise, use the Idempotent Secure Hash Message Filter, which filters messages based on their hash value:

<idempotent-secure-hash-message-filter messageDigestAlgorithm="SHA-256">
    <simple-text-file-store directory="./idempotent"/>
</idempotent-secure-hash-message-filter>
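Put together, the filter sits between the Splitter and the Database endpoint. A minimal sketch (flow name, connector config, and expression are assumptions, not taken from your project):

```xml
<flow name="fileToDbFlow">
    <file:inbound-endpoint path="/tmp/input"/>
    <json:json-to-object-transformer returnClass="java.util.List"/>
    <collection-splitter/>
    <!-- Discard any record whose id has already been seen -->
    <idempotent-message-filter idExpression="#[payload.id]">
        <simple-text-file-store directory="./idempotent"/>
    </idempotent-message-filter>
    <!-- Only first-time records reach the insert -->
    <db:insert config-ref="MySQL_Configuration">
        <db:parameterized-query><![CDATA[
            INSERT INTO entries (id, name) VALUES (#[payload.id], #[payload.name])
        ]]></db:parameterized-query>
    </db:insert>
</flow>
```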

Please check the Mule message filters reference documentation for more info.

1
votes

Personally I would try to avoid an idempotent filter with a simple message store, as it will prevent potential later updates of the data from reaching the DB.

If your DBMS supports it, I would try using an UPSERT mechanism, which effectively renders your query idempotent. In PostgreSQL this can be done with INSERT ... ON CONFLICT, and in MySQL with INSERT ... ON DUPLICATE KEY UPDATE.
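For example, with MySQL the insert step could use ON DUPLICATE KEY UPDATE, which updates the existing row instead of failing when the primary key already exists. A sketch (table, columns, and connector config are made up for illustration):

```xml
<db:insert config-ref="MySQL_Configuration">
    <db:parameterized-query><![CDATA[
        INSERT INTO entries (id, name) VALUES (#[payload.id], #[payload.name])
        ON DUPLICATE KEY UPDATE name = VALUES(name)
    ]]></db:parameterized-query>
</db:insert>
```

The PostgreSQL (9.5+) equivalent would be INSERT ... ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name.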

0
votes

You can check for duplicates easily using .ack queries in Mule.

An .ack query is a query that runs automatically, immediately after the normal query.

You need to create an .ack query that runs immediately after your insert query, checks the rows that were already inserted, and sets a flag.

Check here how to do it with an .ack query: http://training.middlewareschool.com/mule/database-transport/
and here: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-Acknowledgment