This is just one way to do it; there are several others. This method uses Records and avoids modifying the underlying data - it simply ignores the fields you don't want during the insert. That is useful when integrating with a larger Flow, where other Processors might expect the original data, or where you are already using Records.
Let's say your Table has the columns
id | name | value
and your data looks like
101,AAA,1000,10
102,BBB,5000,20
You could use a PutDatabaseRecord processor with Unmatched Field Behavior set to Ignore Unmatched Fields and Unmatched Column Behavior set to Ignore Unmatched Columns, and add a CSVReader as the Record Reader.
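A sketch of the relevant PutDatabaseRecord properties (the pooling service and table name are placeholders for your own values):

Record Reader: CSVReader
Statement Type: INSERT
Database Connection Pooling Service: <your DBCPConnectionPool>
Table Name: <your table>
Unmatched Field Behavior: Ignore Unmatched Fields
Unmatched Column Behavior: Ignore Unmatched Columns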
In the CSVReader you could set the Schema Access Strategy to Use 'Schema Text' Property. Then set the Schema Text property to the following:
{
  "type": "record",
  "namespace": "nifi",
  "name": "db",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "name", "type": "string" },
    { "name": "ignoredField", "type": "string" },
    { "name": "value", "type": "string" }
  ]
}
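The remaining CSVReader settings could then look like this (setting Treat First Line as Header to false is an assumption based on the sample data having no header row):

Schema Access Strategy: Use 'Schema Text' Property
Schema Text: <the schema above>
Treat First Line as Header: false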
This would match the NiFi Record fields against the DB Table columns: fields 1, 2 and 4 (id, name, value) match columns, while field 3 (ignoredField) is dropped because it does not match any column name.
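So, with the sample data above, the rows inserted into the table would be:

id | name | value
101 | AAA | 10
102 | BBB | 20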
Obviously, amend the field names in the Schema Text to match the column names of your DB Table. You can also do data type checking/conversion here, as sketched below.
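For example, if id and value are numeric columns, a variant of the same schema (just an illustration - adjust the types to your table) would have the reader parse those fields as integers instead of strings:

{
  "type": "record",
  "namespace": "nifi",
  "name": "db",
  "fields": [
    { "name": "id", "type": "int" },
    { "name": "name", "type": "string" },
    { "name": "ignoredField", "type": "string" },
    { "name": "value", "type": "int" }
  ]
}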