I am using a flow as follows (basically to fetch a file from S3, convert a few records from the main CSV file, and later push them to Elasticsearch): GetSQS -> UpdateAttribute -> SplitJson -> EvaluateJsonPath -> UpdateAttribute -> ConvertRecord -> other processors...
I am able to fetch the file from S3 correctly, but the ConvertRecord processor throws the error: Invalid char between encapsulated token and delimiter.
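For context, that message comes from the CSV parser when a character appears between a closing quote and the next delimiter (e.g. a stray quote or trailing text after a quoted field). A minimal sketch to locate such lines locally, assuming the S3 object has been downloaded as sample.csv (a hypothetical name):

```python
import csv

# Minimal sketch, assuming the S3 object was copied locally as
# "sample.csv" (hypothetical name). strict=True makes Python's csv
# module raise csv.Error on malformed quoting instead of silently
# recovering, surfacing the same kind of line that trips CSVReader.
with open("sample.csv", newline="") as f:
    reader = csv.reader(f, strict=True)
    try:
        for _ in reader:
            pass
        print("No quoting problems found")
    except csv.Error as e:
        print(f"Malformed CSV near line {reader.line_num}: {e}")
```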
Please find the ConvertRecord configs below:
**CSVRecordReader**: Schema Access Strategy set to "Use 'Schema Text' Property"
Schema Text:
{
  "type": "record",
  "name": "AVLRecord0",
  "fields": [
    {"name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis"},
    {"name": "Field_0", "type": "double"},
    {"name": "Field_1", "type": "double"},
    {"name": "Field_2", "type": "double"},
    {"name": "Field_3", "type": "double"}
  ]
}
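Before pasting the Schema Text, it can be worth confirming that it is valid JSON and a legal Avro record, since an unbalanced brace in the fields list fails before NiFi can use the schema at all. A minimal sketch using the fastavro library (one Avro parser among several; not something NiFi itself requires):

```python
from fastavro import parse_schema

# Sketch: check that the Schema Text above parses as a legal Avro record.
schema = {
    "type": "record",
    "name": "AVLRecord0",
    "fields": [
        {"name": "TimeOfDay", "type": "string", "logicalType": "timestamp-millis"},
        {"name": "Field_0", "type": "double"},
        {"name": "Field_1", "type": "double"},
        {"name": "Field_2", "type": "double"},
        {"name": "Field_3", "type": "double"},
    ],
}
parse_schema(schema)  # raises an exception if the schema is invalid
print("Schema is valid Avro")
```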
**CSVRecordSetWriter**:
Schema Write Strategy: Set 'avro.schema' Attribute
Schema Access Strategy: Use 'Schema Text' Property
Please tell me why I am not able to see the converted record after successfully fetching from S3.
The desired output is CSV format only. Please find the attached sample file uploaded on S3; I want to convert only up to Field_5.
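Within NiFi, restricting the output to the wanted fields is normally done by listing only those fields in the Schema Text. For reference outside NiFi, a minimal sketch (hypothetical file names; adjust N to match the real column count through Field_5) that trims each record to its first N columns:

```python
import csv

# Minimal sketch with hypothetical file names: keep only the first N
# columns of each record, mirroring a Schema Text that stops at Field_5.
N = 7  # e.g. TimeOfDay plus Field_0 .. Field_5 -- adjust as needed
with open("sample.csv", newline="") as src, \
        open("trimmed.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow(row[:N])
```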
Attached are the controller service screenshots:
Thank you!

1. Check whether the file has extra quote/comma characters embedded (or is otherwise not in valid CSV format).
2. Change the 'Ignore CSV Header Column Names' property to True in the CSVReader controller service (as you are not using the CSV file header).
Please let us know your findings. - notNull
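To illustrate point 2: once the field names come from the Schema Text, the file's own header line should be skipped and its names ignored. A rough standalone analogue (sample.csv is a hypothetical stand-in, not NiFi's API):

```python
import csv

# Rough analogue of 'Treat First Line as Header' = true combined with
# 'Ignore CSV Header Column Names' = true: skip the header line and
# take the field names from the schema instead of from the file.
fieldnames = ["TimeOfDay", "Field_0", "Field_1", "Field_2", "Field_3"]
with open("sample.csv", newline="") as f:
    next(f)  # discard the file's header line
    for row in csv.DictReader(f, fieldnames=fieldnames):
        print(row["TimeOfDay"], row["Field_0"])
```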