I am collecting log data with Filebeat 7.x, but I am facing a problem: the log volume is very large (about 100 GB per day).
Now I am wondering how to collect only the error-level logs from the source files. What is the best way to do this?
I am using Filebeat to ship the logs to Elasticsearch, which runs in a Kubernetes cluster. My concern is: do I have to add Kafka and Logstash just to define such a filtering rule?
Please find below the Filebeat config currently being used:
filebeat.yml: |
  filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

  output.elasticsearch:
    host: '${NODE_NAME}'
    hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
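
For example, would it be enough to drop non-error events directly in Filebeat with a processor on the input, something like the rough sketch below? This assumes the applications write the level literally as "ERROR" somewhere in the raw message line, which may not hold for all of our containers:

  filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Assumption: the log level appears literally as "ERROR" in the message text.
      # Drop every event whose message does not contain that string.
      - drop_event:
          when:
            not:
              contains:
                message: "ERROR"

Or would include_lines / exclude_lines on the input be a better fit for this, and is filtering in Filebeat alone a reasonable approach at this volume, without Kafka and Logstash in between?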