
I am running Filebeat on a number of containerised services to collect logs and send them to a Logstash (v5.3.1) container, which in turn uploads them to AWS S3.

I have enabled server-side encryption with the default KMS key to encrypt logs at rest, and it works fine. However, when I add a bucket policy that denies access if KMS SSE is not enabled, Logstash fails with the error:

    ERROR logstash.outputs.s3 - Error validating bucket write permissions! {:message=>"Access Denied", :class=>"Aws::S3::Errors::AccessDenied"}

As soon as I remove the Deny statements from the policy, it works again.

Logstash output config looks like:

output {
  s3 {
    server_side_encryption           => true
    server_side_encryption_algorithm => "aws:kms"
    region => "XXXXXX"
    bucket => "XXXXX"
    prefix => "XXXXXXXX"
  }
}

S3 Bucket Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Logging bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::XXXXXXXXXXX:root"
                ]
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket_name_12343456/*"
        },
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucket_name_12343456/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucket_name_12343456/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

The odd thing is that when I remove the Deny* statements from the policy above, it works: files are written to the S3 bucket and are marked: Server side encryption AWS-KMS, KMS key ID XXXXXXXXXXX.

Finally, as a test, I simply ran aws s3 cp with encryption enabled, and that call went through successfully!
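For reference, the manual upload test was along these lines (the file name and key prefix are placeholders; the --sse and --sse-kms-key-id flags are the CLI's SSE-KMS options):

    aws s3 cp ./test.log s3://bucket_name_12343456/logs/test.log \
        --sse aws:kms \
        --sse-kms-key-id XXXXXXXXXXX

This makes the CLI send the x-amz-server-side-encryption: aws:kms header, so both Deny conditions in the bucket policy are satisfied and the upload is allowed.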


1 Answer


I had a similar problem. I believe this happens because, during startup, Logstash attempts to verify that it has sufficient permissions to write to the bucket by placing a test file at the root of the bucket (ignoring the encryption settings) and subsequently deleting it again. Since that test upload does not carry the encryption header, the Deny statements in your bucket policy reject it.

You can disable this behaviour by adding validate_credentials_on_root_bucket => false to your s3 output config. For me, this resolved the issue.
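With that option added, the output section from the question would look something like this (same values as in the question; validate_credentials_on_root_bucket is an option of the logstash-output-s3 plugin, which defaults to true):

    output {
      s3 {
        server_side_encryption           => true
        server_side_encryption_algorithm => "aws:kms"
        region => "XXXXXX"
        bucket => "XXXXX"
        prefix => "XXXXXXXX"
        validate_credentials_on_root_bucket => false
      }
    }

Note that with the check disabled, any genuine permissions problem will only surface later, when Logstash actually tries to upload a log file.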