3 votes

I have several hundred GB of backed-up folders/files stored in an AWS S3 bucket using the "Standard" storage class. I used Arq to back everything up to S3 automatically, on a regular basis.

For a variety of reasons, I chose Standard when I first initiated the backup. However, none of these folders/files needs frequent access any longer, nor will they in the future. The Glacier storage class is perfectly suitable now.

I'm aware I can use Lifecycle rules to have AWS automatically scan the bucket every day for files matching certain rules and move them to Glacier. However, I don't really have any need for that -- I'd rather just take the ENTIRE bucket right now, all of it, and move it to Glacier. Just be done with it so that all existing and future files in that bucket are stored in Glacier.

How can I do this? I would greatly prefer to do this through the console or CLI rather than developing a script to use the API. That said, if there's no other way and you are aware of an existing script somewhere, I'm not opposed to it.

Within the S3 console for the bucket, the "Change Storage Class" menu item only provides options for Standard, Standard-IA, One Zone-IA, and Reduced Redundancy.

If you're going to suggest something that requires the CLI, I'd appreciate explicit commands and syntax, if you don't mind -- I'm not familiar enough with the CLI.


2 Answers

2 votes

There is a Glacier API you could integrate with: https://docs.aws.amazon.com/amazonglacier/latest/dev/amazon-glacier-api.html, but that's a lot of work just to archive one bucket. As @michael-sqlbot mentions below, what you're attempting to achieve isn't really what that API is designed for, so it's not a good choice.

If you look at previous questions on here about S3/Glacier and moving files between them, most users recommend using lifecycle rules. For example: Move files between amazon S3 to Glacier and vice versa programatically using API

You can set up a lifecycle rule that transitions files from S3 to Glacier one day after their creation -- so in your case, after you create the rule and it is evaluated, it will move all of your existing files to Glacier, and future files will follow the same rule. For example, see this guide (and in particular point 5b): https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html
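If you'd rather do this from the CLI than the console, here is a minimal sketch using aws s3api put-bucket-lifecycle-configuration -- the bucket name, rule ID, and file name are placeholders, so adjust as needed:

# lifecycle.json -- an empty prefix applies the rule to the whole bucket;
# objects transition to Glacier one day after creation.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "move-everything-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 1, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

# Attach the rule to the bucket (replace [s3-bucket] with your bucket name)
aws s3api put-bucket-lifecycle-configuration --bucket [s3-bucket] --lifecycle-configuration file://lifecycle.json

Lifecycle transitions are evaluated roughly once a day, so existing objects won't move instantly, but both existing and future objects in the bucket will end up in Glacier without any further action.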

0 votes

This answer likely isn't appropriate for everyone, but depending on your needs, it is certainly an option.

It sounds absurd, but possibly the easiest thing is to just download the files out of S3 and then re-upload them. This of course only works if there are just a handful of them; otherwise you'd need to write a script or use the CLI's recursive copy (see the sketch after the commands below).

# Download the object to the current directory
aws s3 cp s3://[s3-bucket]/[filenames.ext] .

# Delete the original Standard-class object
aws s3 rm s3://[s3-bucket]/[filenames.ext]

# Re-upload it with the new storage class
aws s3 cp [filenames.ext] s3://[s3-bucket]/[filenames.ext] --storage-class DEEP_ARCHIVE

Repeat on additional files as necessary.
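If there are more than a handful of objects, the same idea can be applied to the whole bucket with the CLI's recursive copy rather than a per-file script. This is only a sketch -- the local directory is a placeholder and the entire bucket has to fit on local disk -- and GLACIER is also a valid --storage-class value if you want Glacier specifically rather than Deep Archive:

# Pull everything down, then push it back up under the new storage class.
# Re-uploading to the same keys overwrites the Standard-class objects,
# so a separate rm pass isn't strictly required.
aws s3 cp s3://[s3-bucket]/ ./local-copy/ --recursive

aws s3 cp ./local-copy/ s3://[s3-bucket]/ --recursive --storage-class DEEP_ARCHIVE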

Of course, keep in mind that this could come with data transfer fees in and out. For me, this wasn't a big deal.