6
votes

I have a bucket with a short lifecycle rule: everything older than 7 days gets deleted. The files that are added have dynamically generated names.

There is one file in the bucket that I would like to exclude from this rule. Is there a way to exclude it so it is never deleted?

2
If the files that need to be deleted share a prefix that is different from the one you want to keep, then you can define the lifecycle rule based on that prefix. – Rajesh

2 Answers

6
votes

There is no way to exclude objects from a rule that matches them. Most likely, you will need to rearrange your objects under prefixes that meet your needs.

There is a hack... which would involve copying the file onto itself frequently enough that it never ages enough to match the rule, but that is obviously fragile. The S3 PUT+Copy operation does allow an object to be copied on top of itself without downloading and re-uploading it, and the copy resets the expiration timer.
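A minimal sketch of that self-copy with boto3 (the bucket and key names are placeholders). Note that S3 rejects an exact in-place copy unless metadata, storage class, or encryption changes, so `MetadataDirective="REPLACE"` is used here:

```python
import boto3

# Hypothetical names for illustration.
BUCKET = "my-bucket"
KEY = "important-file.csv"

s3 = boto3.client("s3")

# Copy the object onto itself. S3 refuses an exact in-place copy unless
# something changes, so the metadata is rewritten (any existing user-defined
# metadata is dropped unless you pass it back via Metadata=...). The copy
# produces a new Last-Modified timestamp, which is what lifecycle expiration
# is evaluated against.
s3.copy_object(
    Bucket=BUCKET,
    Key=KEY,
    CopySource={"Bucket": BUCKET, "Key": KEY},
    MetadataDirective="REPLACE",
)
```

You would have to run this on a schedule shorter than the rule's expiration window, which is why it is fragile.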

But most likely a better solution is to prefix your random filenames with a few static characters. The S3 partition-splitting implementation (the way S3 scales bucket capacity) apparently works just as well with a static prefix (e.g. images/) followed by random characters as it does with entirely random keys.
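For example, a lifecycle rule scoped to a prefix leaves anything stored outside that prefix untouched. A sketch with boto3 (bucket name and prefixes are made up here):

```python
import boto3

s3 = boto3.client("s3")

# Expire only objects under the "generated/" prefix after 7 days; an object
# stored outside that prefix (e.g. "keep/config.json") is never matched.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-generated-after-7-days",
                "Filter": {"Prefix": "generated/"},
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }
        ]
    },
)
```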

1
vote

If the file is small enough that paying for both Glacier and S3 storage doesn't matter, you could also initiate a restore and set Days to a very high number.
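A sketch of the restore call this refers to, assuming the object has already transitioned to a Glacier storage class (the names and the Days value are illustrative):

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restored copy of an archived object and keep it
# available for a very large number of days.
s3.restore_object(
    Bucket="my-bucket",
    Key="important-file.csv",
    RestoreRequest={
        "Days": 3650,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```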