I have a .NET Core processing AWS Lambda that writes logs to a single AWS CloudWatch log stream in JSON format. It writes roughly 200 to 400 log entries, each containing around 800 lines (25 KB max) of JSON data.
Once the processing Lambda is done, another export Lambda reads the logs, writes them to an Excel file, and uploads the file to S3. The problem is that it only writes about 110 logs when CloudWatch has 200 entries. I can see in the console that there are 200 log entries, but the Excel file only contains about 110 to 116 rows.
Before exporting to Excel I first check the count of log entries, and I noticed that the count itself is wrong: it should be 200 but comes back as around 110. This is the code I am using:
using (AmazonCloudWatchLogsClient client = new AmazonCloudWatchLogsClient("xxx", "xxx", "xx-xxxx-2"))
{
    var request = new FilterLogEventsRequest()
    {
        LogGroupName = GroupName,
        LogStreamNames = new List<string>() { StreamName }
    };

    Task<FilterLogEventsResponse> response = client.FilterLogEventsAsync(request);
    response.Wait();

    if (null != response.Result && null != response.Result.Events)
        result = response.Result.Events.Count;
}
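From the FilterLogEvents documentation I understand that a single response is limited to 1 MB of data (and at most 10,000 events), and that the service returns a NextToken when more events are available. Below is a minimal sketch of how I think the call could be paginated; it reuses my GroupName/StreamName variables, the credential and region strings are placeholders, and I haven't yet verified that this resolves my case:

using System.Collections.Generic;
using Amazon;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

// Sketch: keep calling FilterLogEventsAsync with the NextToken from the
// previous page until no token is returned, accumulating every page's events.
using (var client = new AmazonCloudWatchLogsClient("xxx", "xxx", RegionEndpoint.GetBySystemName("xx-xxxx-2")))
{
    var allEvents = new List<FilteredLogEvent>();
    string nextToken = null;

    do
    {
        var request = new FilterLogEventsRequest()
        {
            LogGroupName = GroupName,
            LogStreamNames = new List<string>() { StreamName },
            NextToken = nextToken // null on the first call
        };

        FilterLogEventsResponse response = client.FilterLogEventsAsync(request).Result;
        if (response?.Events != null)
            allEvents.AddRange(response.Events);

        nextToken = response?.NextToken;
    } while (!string.IsNullOrEmpty(nextToken));

    result = allEvents.Count;
}

If that 1 MB page cap is what I'm hitting, it would also be consistent with my tests: short text entries fit all 200 events into one page, while the large JSON entries cut the first page off after roughly 110.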
I tested further: when I write simple text instead of long JSON data, the code above returns the correct count. But when the logs contain long JSON data, the count is incorrect. If I strip the JSON data down to fewer than about 200 lines per entry, the count is correct again.
So, is there some limit here? Is FilterLogEventsAsync() unable to return all the log events when each entry contains a large amount of JSON data (25 KB or 800 lines)?