S3 Backup

These backup scripts are based on work done by TRIa.

These scripts compress folders into pieces of a desired size and encrypt them on the client side. They require the s3cmd package and the zsh shell. Make sure the bucket is set to archive the S3 objects to Glacier within the first hours so that you pay much less for storage.
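
Since the archives are written in pieces by split, restoring one simply means concatenating the pieces back into a single stream. A minimal restore sketch, assuming the pieces of a Thunderbird backup from 24-01-15 (example date) are present in /home/user/Backups:

cd /home/user
cat /home/user/Backups/24-01-15.thunderbird.tar.gz.* | tar xzf -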

Script 1
#!/bin/zsh
#

# Date stamp used in the backup file names; the second line just prints a full
# timestamp to the terminal when the script starts.
date=$(date +"%y-%m-%d")
date +"%y-%m-%d"+"%H-%M"

bkpdir=/home/user/Backups

homedir=/home/user
cd $homedir

if ls $bkpdir/$date.thunderbird.tar.gz.* 1> /dev/null 2>&1; then
echo "Thunderbird backup exists"
else
echo "Thunderbird backup does not exist"
echo "Backing up Thunderbird directory."
tar czf - .thunderbird | split -b 400MB - $bkpdir/$date.thunderbird.tar.gz.
fi

if ls $bkpdir/$date.mozilla.tar.gz.* 1> /dev/null 2>&1; then
echo "Mozilla FF backup exists"
else
echo "Mozilla FF backup does not exist"
echo "Backing up hidden Mozilla directory."
tar czf - .mozilla | split -b 400MB - $bkpdir/$date.mozilla.tar.gz.
fi

if ls $bkpdir/$date.fonts.tar.gz.* 1> /dev/null 2>&1; then
echo "Fonts backup exists"
else
echo "Fonts backup does not exist"
echo "Backing up hidden fonts directory."
tar czf - .fonts | split -b 400MB - $bkpdir/$date.fonts.tar.gz.
fi

if ls $bkpdir/$date.gnupg.tar.gz.* 1> /dev/null 2>&1; then
echo "GnuPG backup exists"
else
echo "GnuPG backup does not exist"
echo "Backing up GnuPG directory."
tar czf - .gnupg | split -b 400MB - $bkpdir/$date.gnupg.tar.gz.
fi

if ls $bkpdir/$date.etc.tar.gz.* 1> /dev/null 2>&1; then
echo "ETC backup exists"
else
echo "ETC backup does not exist"
echo "Backing up ETC"
cd /etc
tar czf - . | split -b 400MB - $bkpdir/$date.etc.tar.gz.
fi

if ls $bkpdir/$date.packages.txt 1> /dev/null 2>&1; then
echo "List of packages backup exists"
else
echo "Backup of list of packages does not exist"
echo "Backing up the list of installed packages."
#yum list installed > $bkpdir/$date.packages.txt
pacman -Qe > $bkpdir/$date.packages.txt
fi
echo "COMPLETED"

Script 2
#!/bin/zsh
#

date=$(date +"%y-%m-%d")
date +"%y-%m-%d"+"%H-%M"
bkpdir=/home/user/Backups

SOURCE=(
"/home/user/dir1"
"/home/user/dir2"
)

for i in "${SOURCE[@]}"
do
cd $i
dir=${PWD##*/}

echo "Processing $i directory."
tar czf - . | split --bytes=400MB - $bkpdir/$date.$dir.bkp.tar.gz.
echo "Archive $dir.bkp.tar.gz has been created."

done
echo "COMPLETED"

Script 3
#!/bin/zsh
#

# Note, to pull a file from S3 use "s3cmd get s3://bucket/file destinationfile"
# You must have the proper .s3cfg file in place to decrypt the file.

# You may also use "gpg encryptedfile" and supply the encryption passphrase if you download
# the file from the web interface. Good luck.
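
# A hypothetical example of the above, using the bucket and config paths defined further
# down in this script (with the same .s3cfg, s3cmd decrypts the file automatically):
#   s3cmd -c /config/location get s3://vvegabucket/somefile ./somefile
#   gpg -d downloadedfile > somefile    # if the copy came from the web interface instead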

# The bucket should be set to transition objects to Glacier. To retrieve, you need to initiate a
# restore request from the S3 web interface. To retrieve an entire folder, there is a
# Windows program called S3 Browser that can transfer entire folders out of Glacier.
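
# An alternative sketch (an assumption; verify against your s3cmd version): newer s3cmd
# releases can issue the Glacier restore request from the command line, e.g.
#   s3cmd -c /config/location restore --restore-days=7 s3://vvegabucket/somefile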

# Define the folders of files to be backed up in SOURCE
SOURCE=(
"/home/michael/Backups"
)

# Set IFS to newline (and backspace) so file names containing spaces survive the find loop below.
IFS=$(echo -en "\n\b")
logFile=/log/s3backup.log
bucket=vvegabucket
s3cfg=/config/location
touch $logFile
echo Finding files and performing backup: Please wait...

# for loop to go through each item of array SOURCE which should contain the
# directories to be backed up

for i in "${SOURCE[@]}"
do

# nested for loop to run find command on each directory in SOURCE

for x in `find $i`
do
# x is each file or dir found by 'find'. if statement determines if it is a regular file

if [ -f "$x" ]
then
# create a hash to mark the time and date of the file being backed up to compare later for
# incremental backups

fileSize=`stat -c %s $x`
modTime=`stat -c %Y $x`
myHash=`echo $x $fileSize $modTime | sha1sum`

# If statement to see if the hash is found in log, meaning it is already backed up.
# If not found proceed to backup

if ! grep -q $myHash $logFile
then
echo Currently uploading $x

# s3cmd command to put an encrypted file in the S3 bucket.
# The s3out variable captures anything written to stderr in case of a transfer error or some
# other problem. If s3out is blank, the transfer completed without incident. If an error occurs,
# nothing is written to the main log file; instead a line goes to an error log and s3out is
# printed to the screen.

s3out=$(s3cmd -c $s3cfg -e put $x s3://$bucket 2>&1 > /dev/null)

if [ "$s3out" = "" ]
then
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
else
# s3out had content, but it may have been only a warning rather than an error. Check whether
# the file appears in the bucket with an upload time within the last 20 minutes. If so, the file
# is considered uploaded. The 20-minute window allows for variance between local and remote clocks.

# List only the object that was just uploaded (s3cmd stores it under its base name).
date1=$(date --date="$(s3cmd -c $s3cfg ls s3://$bucket/$(basename $x) | awk '{print $1 " " $2 " +0000"}')" +%s)
date2=$(date +%s)

datediff=$((date2-date1))

if [[ $datediff -ge -1200 ]] && [[ $datediff -le 1200 ]]
then
echo There was a possible error but the time of the uploaded file was written within
echo the last 20 minutes. File will be considered uploaded and recorded as such.
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
echo `date`: $x had warnings but seemed to be successfully uploaded and was logged to main log file >> $logFile.err
else
echo $s3out
echo `date`: $s3out >> $logFile.err
fi
echo ------------------------------------------------------------------------------------
fi
fi
fi

done
done

# processed all files in SOURCE. Now upload actual script and file list. They are not encrypted.

#echo Uploading $logFile
#s3cmd -c $s3cfg put $logFile s3://Linux-Backup > /dev/null
#echo Uploading $0
#s3cmd -c $s3cfg put $0 s3://Linux-Backup > /dev/null

echo
echo "Backup to S3 has been completed. You may proceed with life."

Script 4
#!/bin/zsh
#

date=$(date +"%y-%m-%d")
date +"%y-%m-%d"+"%H-%M"

bkpdir=/home/user/Backups
scrdir=/home/user/scripts

tar -zcvf $bkpdir/$date.scripts.tar.gz $scrdir/

echo "Generating checksums..."
cd $bkpdir
md5sum * > $date.checksums.txt
echo "COMPLETED"

Whatever you do or fix, please test it afterwards. Better yet, do not fix things that work; wait until they break, otherwise you will feel the wrath of dummy users.