Environment:
Kubernetes Cluster: EKS
Logging Agent: FluentBit version 1.2
Destination for FluentBit: AWS Kinesis Data Firehose delivery stream
FluentBit output plugin: amazon-kinesis-firehose-for-fluent-bit
Description:
We have a setup where FluentBit (deployed as a DaemonSet) pushes logs to a Kinesis Data Firehose delivery stream. There are 4 FluentBit pods (one per node/EC2 instance in the EKS cluster) collecting logs and submitting them to the same delivery stream. We are in the Canada (Central) region, where Firehose has an ingestion limit of 1 MB/s per delivery stream.
We were getting multiple throttling errors from Firehose.
The volume of data being sent is not huge: in CloudWatch I see that, apart from some occasional spikes over 1 MB/s, the ingestion rate is quite low most of the time.
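For reference, each DaemonSet pod runs with an output section roughly like the one below; the region and stream name here are placeholders rather than our exact values:

```
[OUTPUT]
    Name            firehose
    Match           *
    region          ca-central-1
    delivery_stream example-logs-delivery-stream
```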
I'm really wondering: is this the right setup, ingesting logs from separate FluentBit pods directly into a single Firehose delivery stream (whose destination is S3)? The options to control the data outflow rate from FluentBit and from the amazon-kinesis-firehose-for-fluent-bit output plugin seem very limited. Limitations:
- In the output plugin, there is no way to control the rate at which data flows out to Firehose.
- If I instead set limits on the input plugin and the FluentBit service section, that only squeezes each FluentBit agent's capacity to buffer and push logs (see the sketch after this list).
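To make the second point concrete, the only knobs I have found are along these lines (the values are illustrative, not what we actually run): a longer Flush interval and a Mem_Buf_Limit on the tail input, plus optionally the throttle filter, which limits records per interval rather than bytes, so it maps only indirectly onto Firehose's MB/s quota:

```
[SERVICE]
    Flush             5

[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On

[FILTER]
    Name              throttle
    Match             *
    Rate              500
    Window            5
    Interval          1s
```

All of these constrain each individual agent's ability to hold and push logs rather than the aggregate rate going into the delivery stream.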
I feel that if there were a single aggregator collecting logs from all the FluentBit agents, so that there was only one point of ingestion into the Firehose delivery stream, the outflow rate would be much easier to control.
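Concretely, I'm imagining something like the following (the service name and port are hypothetical): the node-level agents would use the forward output to ship records to one central FluentBit/Fluentd aggregator, and only the aggregator would hold the firehose output:

```
# On each node-level FluentBit agent (DaemonSet)
[OUTPUT]
    Name   forward
    Match  *
    Host   fluentbit-aggregator.logging.svc.cluster.local
    Port   24224

# On the single aggregator (e.g. a one-replica Deployment)
[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[OUTPUT]
    Name            firehose
    Match           *
    region          ca-central-1
    delivery_stream example-logs-delivery-stream
```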
What would you suggest?