3
votes

What is the best way to automate daily snapshots of my two EBS volumes and manage them?

By 'manage' I mean that I am looking for a script that will not only create daily backups (I am guessing a cron job will be involved) but will also delete snapshots that are older than x days, so as to avoid excessive storage usage.

I believe that such scripts exist somewhere out there, but I can't seem to pin one down.

Thanks


4 Answers

3
votes

I've used a similar open-source tool, ec2-automate-backup from http://awsmissingtools.com. When run as "ec2-automate-backup -s tag -t Backup-true -k 14 -p", it backs up all EBS volumes that carry the tag Backup=true and marks those snapshots for removal after the number of days given to -k: keep snapshots for 14 days with -k 14, or for an entire year with -k 365.
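
To schedule it, a crontab entry along these lines should do the trick. This is only a sketch: the install path and log file are assumptions on my part, and it presumes the volumes have already been tagged Backup=true and that AWS credentials are available to the cron environment.

    # Daily at 23:00: snapshot every volume tagged Backup=true and purge
    # snapshots older than 14 days (-k 14 -p). Path and log file are assumed.
    0 23 * * * /opt/aws-missing-tools/ec2-automate-backup.sh -s tag -t Backup-true -k 14 -p >> /var/log/ec2-automate-backup.log 2>&1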

1
votes

I'm sure there are other implementations of this kind of script, but here's mine:

http://www.capsunlock.net/2009/10/deleting-old-ebs-snapshots.html
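
For reference, here is a minimal sketch of the same idea using today's AWS CLI; it is not the linked script itself. It assumes the CLI is installed and configured with credentials, that GNU date is available, and the 10-day retention is just an example value.

    #!/bin/bash
    # Sketch only (not the linked script): delete this account's EBS snapshots
    # that were started more than 10 days ago.
    cutoff=$(date -u -d '10 days ago' +%Y-%m-%dT%H:%M:%S)
    aws ec2 describe-snapshots --owner-ids self \
        --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
    while read -r snapshot_id start_time; do
      # ISO 8601 timestamps in the same timezone compare correctly as strings
      if [[ "$start_time" < "$cutoff" ]]; then
        echo "Deleting ${snapshot_id} (started ${start_time})"
        aws ec2 delete-snapshot --snapshot-id "${snapshot_id}"
      fi
    done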

0
votes

I was running into the same problem, so I created a script for it. If you have PHP installed on your server, here is what you can do.

The script will not only create backups at the interval you set, but will also delete snapshots that are older than the age you indicate.

  1. Open SSH connection to your server.
  2. Navigate to the folder

    $ cd /usr/local/
    
  3. Clone this gist into an ec2 folder

    $ git clone https://gist.github.com/9738785.git ec2
    
  4. Go to that folder

    $ cd ec2
    
  5. Make backup.php executable

    $ chmod +x backup.php
    
  6. Open the releases page of the AWS SDK for PHP project on GitHub and copy the URL behind the aws.zip button. Now download it to your server.

    $ wget https://github.com/aws/aws-sdk-php/releases/download/2.6.0/aws.zip
    
  7. Unzip this file into an aws directory.

    $ unzip aws.zip -d aws 
    
  8. Edit the backup.php file and set all the settings on lines 5-12

    $dryrun     = FALSE;
    $interval   = '24 hours';
    $keep_for   = '10 Days';
    $volumes    = array('vol-********');
    $api_key    = '*********************';
    $api_secret = '****************************************';
    $ec2_region = 'us-east-1';
    $snap_descr = "Daily backup";
    
  9. Test it. Run this script

    $ ./backup.php
    

    Check that a snapshot was created; a quick way to verify it from the command line is shown after this list.

  10. If everything is OK, just add a cron job to run it daily.

    0 23 * * * /usr/local/ec2/backup.php
    
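To confirm that step 9 actually produced a snapshot, one quick check is to list the snapshots for your volume with the AWS CLI. This is just a sketch: it assumes the AWS CLI is installed and configured, and vol-******** stands for the volume ID you put in backup.php.

    # List snapshots for the volume configured in backup.php (replace the ID):
    aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-******** \
        --query 'Snapshots[].[SnapshotId,StartTime,State]' --output table
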
0
votes

I came across many people looking for a tool to administer EBS snapshots. I found several tools on the Internet, but they were just scripts and incomplete solutions. In the end I decided to create a program that is more flexible, centralized, and easy to administer.

The idea is to have one centralized program that manages all the EBS snapshots (for volumes local to the instance or remote ones).

I have created a small Perl program: https://github.com/sciclon/EBS_Snapshots

Some features:

  • The program runs in daemon mode or in script mode (crontab).

  • You can choose to handle only locally attached volumes, or remote ones as well.

  • You can define a log file.

  • You can define, per volume, how many snapshots to keep.

  • You can define, per volume, the frequency between snapshots.

  • Frequency and quantity work like a round-robin: when the limit is reached, the oldest snapshot is removed.

  • You can readjust the quantity in one step; for example, if you have 6 snapshots and change the quantity to 3, the process readjusts automatically.

  • You can define a "prescript" to execute before taking the snapshot, for example to unmount the volume, stop a service, or check the instance load. The parent process waits for its exit code ("0" means success), and you can define whether to continue or not depending on that code; a minimal example of such a hook follows this list.

  • You can define a "postscript" to execute after taking the snapshot (for example, an email telling you about it).

  • You can mark "protected snapshots" that are skipped; they are treated as read-only and will never be erased.

  • You can reconfigure the script on the fly while it runs in daemon mode; the script accepts signals and IPC.

  • It has a local cache to avoid hitting the API several times. You can add or modify any setting in the config file and reload it without killing the process.
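
As an illustration of the "prescript" hook mentioned above, here is a minimal sketch of a load-check script. The threshold and the use of /proc/loadavg are my assumptions; the only contract taken from the description is that exit code 0 means "continue with the snapshot".

    #!/bin/sh
    # Hypothetical prescript: skip the snapshot when the 1-minute load average
    # is above a threshold. Exit code 0 tells the parent process to continue;
    # any non-zero code means "do not snapshot now".
    threshold=4.0
    load=$(cut -d ' ' -f 1 /proc/loadavg)
    # awk exits with 1 when the load exceeds the threshold and 0 otherwise;
    # as the last command, its status becomes the script's exit status.
    awk -v l="$load" -v t="$threshold" 'BEGIN { exit (l > t) }'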