Reliable incremental backup to S3->Glacier

Use this script:
#!/bin/bash
#

# Note: to pull a file from S3, use "s3cmd get s3://bucket/file destinationfile"
# You must have the proper .s3cfg file in place to decrypt the file.

# You may also use "gpg encryptedfile" and supply the encryption code if you download
# from the web interface. Good luck.

# The bucket should be set to transition to Glacier. To retrieve, you need to initiate a
# retrieval request from the S3 web interface. To retrieve an entire folder, there is a
# Windows program called S3 Browser that can transfer entire folders out of Glacier.

# Define the folders of files to be backed up in SOURCE
SOURCE=(
"/home/owner/Documents"
"/home/owner/Pictures"
"/mnt/files/Photographs"
"/mnt/files/Documents"
"/mnt/files/Home Movies"
)

# Set IFS to newline so filenames containing spaces survive word splitting in the loops below
IFS=$(echo -en "\n\b")
logFile=/mnt/files/scripts/backupmanifest.log
bucket=MyBucket
s3cfg=/home/owner/.s3cfg

touch "$logFile"

echo "Finding files and performing backup: Please wait..."

# for loop to go through each item of array SOURCE which should contain the
# directories to be backed up

for i in "${SOURCE[@]}"
do

# nested for loop to run find command on each directory in SOURCE

for x in `find "$i"`
do
# x is each file or dir found by 'find'. if statement determines if it is a regular file

if [ -f "$x" ]
then
# create a hash to mark the time and date of the file being backed up to compare later for
# incremental backups

fileSize=`stat -c %s "$x"`
modTime=`stat -c %Y "$x"`
myHash=`echo "$x" $fileSize $modTime | sha1sum`

# If statement to see if the hash is found in log, meaning it is already backed up.
# If not found proceed to backup

if ! grep -qF "$myHash" "$logFile"
then
echo Currently uploading $x

# s3cmd command to put an encrypted file in the s3 bucket.
# The s3out var captures anything on stderr in case of a file transfer error or some other
# problem. If s3out is blank, the transfer occurred without incident. If an error occurs,
# no output is written to the log file, but output is written to an error log and s3out is
# printed to the screen.

s3out=$(s3cmd -c "$s3cfg" -e put "$x" "s3://$bucket/$HOSTNAME$x" 2>&1 > /dev/null)
if [ "$s3out" = "" ]
then
echo "$x :///: $fileSize :///: $modTime :///: $myHash" >> "$logFile"
else
# s3out had content, but it was possibly a warning rather than an error. Check whether
# an uploaded file exists with a timestamp within the last 2 minutes. If so, the file will
# be considered uploaded. Two minutes accounts for variance between local and remote clocks.

date1=$(date --date="$(s3cmd ls "s3://$bucket/$HOSTNAME$x" | awk '{print $1 " " $2 " +0000"}')" +%s)
date2=$(date +%s)

datediff=$(($date2-$date1))

if [[ $datediff -ge -120 ]] && [[ $datediff -le 120 ]]
then
echo "There was a possible error but the time of the uploaded file was written within"
echo "the last 2 minutes. File will be considered uploaded and recorded as such."
echo "$x :///: $fileSize :///: $modTime :///: $myHash" >> "$logFile"
echo "`date`: $x had warnings but seemed to be successfully uploaded and was logged to main log file" >> "$logFile.err"
else
echo "$s3out"
echo "`date`: $s3out" >> "$logFile.err"
fi
echo ------------------------------------------------------------------------------------
fi
fi
fi

done
done

# processed all files in SOURCE. Now upload actual script and file list. They are not encrypted.

echo Uploading $logFile
s3cmd put $logFile s3://Linux-Backup > /dev/null
echo Uploading $0
s3cmd put $0 s3://Linux-Backup > /dev/null

echo
echo Backup to S3 has been completed. You may proceed with life.
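The incremental check in the script boils down to fingerprinting each file by its path, size, and mtime, and skipping files whose fingerprint is already in the manifest. A minimal standalone sketch of that idea, using throwaway demo file names rather than anything from the script:

```shell
#!/bin/bash
# Demo of the manifest check: a file counts as "already backed up" when the
# sha1 of its path + size + mtime appears in the log. All names here are
# throwaway demo values.
logFile=demo-manifest.log
: > "$logFile"
echo "hello" > demo.txt

fileSize=$(stat -c %s demo.txt)
modTime=$(stat -c %Y demo.txt)
myHash=$(echo demo.txt $fileSize $modTime | sha1sum)

# First pass: hash not in log, so the file would be uploaded and recorded.
if ! grep -qF "$myHash" "$logFile"; then
    echo "would upload demo.txt"
    echo "demo.txt :///: $fileSize :///: $modTime :///: $myHash" >> "$logFile"
fi

# Second pass: hash is now in the log, so the file is skipped.
if grep -qF "$myHash" "$logFile"; then
    echo "demo.txt already backed up"
fi
```

Touching the file (new mtime) or changing its size changes the hash, so the file is picked up again on the next run; renaming it does too, since the path is part of the fingerprint.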

Thanks to TRIA Technology

I execute this script in ZSH instead and had to replace:
datediff=$(($date2-$date1))
with
datediff=$(expr $date2 - $date1)
otherwise the script has problems on some files.
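A portable middle ground, assuming date1 and date2 hold plain epoch seconds, is to drop the inner dollar signs, which both bash and zsh accept:

```shell
# Arithmetic expansion without the inner dollar signs works in bash, zsh,
# and any POSIX shell; the values here stand in for the script's timestamps.
date1=1000
date2=1090
datediff=$((date2 - date1))
echo "$datediff"   # prints 90
```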

Because the encryption does not give any special extension to encrypted files, please use this script to batch-decrypt them (passfile here is a file containing your passphrase, fed to gpg on file descriptor 3):
for i in ./*; do
/usr/bin/gpg --batch --passphrase-fd 3 --decrypt "$i" 3<passfile > "$i.decoded" ;
done

Can also use --passphrase or --passphrase-file instead.
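A self-contained round-trip sketch of the passphrase-fd approach, assuming GnuPG 2.x is installed and using throwaway file names:

```shell
# Encrypt symmetrically (roughly what s3cmd -e does under the hood), then
# decrypt the result. --pinentry-mode loopback is required on GnuPG 2.1+
# for --passphrase-fd to work non-interactively.
echo "top secret data" > sample.txt
echo "mypassphrase" > passfile

gpg --batch --yes --quiet --pinentry-mode loopback --passphrase-fd 3 \
    --symmetric --output sample.txt.gpg sample.txt 3<passfile
gpg --batch --yes --quiet --pinentry-mode loopback --passphrase-fd 3 \
    --decrypt sample.txt.gpg 3<passfile > sample.txt.decoded
```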

You can calculate your monthly costs with the AWS pricing calculator.

Amazon's Reduced Redundancy Storage option is explained in the S3 documentation.

noatime

The use of noatime, nodiratime or relatime can improve drive performance. By default Linux uses atime, which records (writes to the drive) the access time every time anything is read. This matters more when Linux is used for servers; it has little value for desktop use. The worst thing about the default atime option is that even reading a file from the page cache (reading from memory instead of the drive) still results in a write! The noatime option fully disables writing file access times to the drive when a file is read. This works well for almost all applications.
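For reference, noatime is set per filesystem in the options column of /etc/fstab; the UUID and filesystem type below are placeholders:

```
# /etc/fstab — example entry with noatime added to the mount options
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  1  1
```

An already-mounted filesystem can be switched without rebooting via "mount -o remount,noatime /".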
Source: Arch Wiki

VideoLAN ain’t easy on Fedora 17

To be able to install this famous open-source video player you need to add these repositories first.
As you can see, by default the Fedora Project community enforces its own set of useful applications, so VLC is not available out of the box. Enable the RPM Fusion repositories below, then install VLC with "yum install vlc":
yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
yum localinstall --nogpgcheck http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm

Drivers

In the past it was hard to install drivers for a specific device under a Linux system; it usually required some sort of pre-configuration. Now it is the other way round: Linux systems install all sorts of drivers without problems, but Windows is lost. The HP website with graphics drivers is long gone, and this is my second day attempting to install these damn drivers under Windows, while on GNU/Linux everything works flawlessly.