You have software running on an EC2 instance and you simply want to create a scheduled backup of a file (or files) to S3. Even though this isn't a proper backup strategy for production, it still saves time for POCs. I needed this while I was testing a Django web app with an SQLite database. Note that SQLite is not a database service or a server; it's just a binary file acting as the database. So, if you back up the data.db file, you've backed up the whole database.

We'll write a simple bash script that zips the file(s) we want, names the archive with a date/time stamp, and copies it to the S3 bucket we provide. Then we'll add the script to the server's crontab. If the instance has an IAM role with the right permissions on the bucket, it'll start working without any noise.
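As a sketch, the instance role needs at least s3:PutObject on the bucket. The bucket name below is a placeholder; adjust it to your own setup:

```shell
# Hypothetical minimal IAM policy document allowing uploads
# to the backup bucket (replace my-backup-bucket with yours).
cat > backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
EOF
```

You can attach this policy to the role assigned to your EC2 instance, so the script needs no hard-coded credentials.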

Create a folder to work in, create the bash script file, make it executable, then open it to add the script:

mkdir backup-s3
cd backup-s3/
touch backup-s3.sh
chmod +x backup-s3.sh
nano backup-s3.sh
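A minimal version of the script might look like this. The bucket name and file path are placeholders (it assumes the AWS CLI and zip are installed on the instance):

```shell
#!/bin/bash
# backup-s3.sh — zip the target file(s) and upload the archive to S3.
# BUCKET and FILES are placeholders; change them for your setup.
BUCKET="s3://my-backup-bucket"
FILES="/home/ubuntu/app/data.db"

# Stamp the archive name with the current date/time,
# e.g. backup-2024-01-31-23-59.zip
TIMESTAMP=$(date +%Y-%m-%d-%H-%M)
ARCHIVE="/tmp/backup-${TIMESTAMP}.zip"

# Create the zip archive
zip -r "$ARCHIVE" $FILES

# Upload to S3 (requires the AWS CLI and an instance role
# or credentials with s3:PutObject on the bucket)
aws s3 cp "$ARCHIVE" "$BUCKET/"

# Clean up the local archive
rm -f "$ARCHIVE"
```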


Feel free to adjust the files/folders to be backed up, and of course configure the S3 bucket name. You can use date/time functions to build the file name. That also lets you search your backup files in the S3 Management Console using the time components as prefixes.
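For example, if you build a year/month prefix into the S3 key (a hypothetical naming scheme, not something the script above requires), the console's prefix search can filter backups by date:

```shell
# Hypothetical: build a year/month key prefix so the S3 console's
# prefix search can filter backups by date.
PREFIX=$(date +%Y/%m)
KEY="backups/${PREFIX}/backup-$(date +%Y-%m-%d-%H-%M).zip"
echo "$KEY"
```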

Then, add the cron entry to your crontab. Type crontab -e and append the line below. It runs at the 59th minute of every hour; adjust it to your needs.

59 * * * * bash /home/ubuntu/backup-s3/backup-s3.sh
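If hourly is too frequent or not frequent enough, a couple of other common schedules for reference (these are config fragments, not a runnable script):

```shell
# Every day at 02:00
0 2 * * * bash /home/ubuntu/backup-s3/backup-s3.sh
# Every 15 minutes
*/15 * * * * bash /home/ubuntu/backup-s3/backup-s3.sh
```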


Done! You’re ready to go.

Please comment and contribute to this blog post using the comments section. See you!