Category Archives: Cloud

Amazon Lumberyard and GameLift

We’re excited to introduce Amazon Lumberyard and Amazon GameLift to game developers using AWS.

Amazon Lumberyard is a free, cross-platform, 3D game engine for developers to create the highest-quality games, connect their games to the vast compute and storage of the AWS Cloud, and engage fans on Twitch. This game engine helps developers build beautiful worlds, make realistic characters, and create stunning real-time effects.

Amazon Lumberyard is available for download in beta for PC and console game developers, with mobile and virtual reality (VR) platforms coming soon. Amazon Lumberyard is free to use, including source. There are no seat fees, subscription fees, or requirements to share revenue. Developers pay standard AWS fees for any AWS services they choose to use. Download the game engine here.

AWS is also releasing Amazon GameLift, a new service for deploying, operating, and scaling session-based multiplayer games, reducing the time required to create multiplayer back-ends from thousands of hours to just minutes. Learn more here.

With Amazon GameLift and Amazon Lumberyard, developers can build multiplayer back-ends without the effort, technical risk, and delays that often cause developers to cut multiplayer features from their games.

Symantec.cloud Message Labs problems this morning

In the interim, a brief breakdown of the issue: at 08:45 UK time this morning (21/08/14) our engineers identified mail queues building across our global mail infrastructure.

We then put up an alert on ClientNet at approximately 09:00am and added the phone message shortly afterwards.

Our engineers worked on this as their highest priority, and at approximately 10:00am mail service was restored, so the mail queues that had built up began to drain (be delivered). While our engineers were restoring towers, some mail would have started to be delivered prior to 10:00am.

This issue would have affected all clients that send mail via Symantec infrastructure.

Service has now been restored and all mail queues have now been delivered.

Glacier: where are the files?

You may wonder why S3 shows files with the GLACIER storage class, yet when you log in to Glacier it appears empty.
This is the explanation from Amazon:

GLACIER storage class objects are visible and available only through Amazon S3, not through Amazon Glacier.

Amazon S3 stores the archived objects in Amazon Glacier; however, these are Amazon S3 objects, and you can access them only by using the Amazon S3 console or the API. You cannot access the archived objects through the Amazon Glacier console or the API.
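For example, restoring a GLACIER-class object is done through the S3 API rather than the Glacier console. A minimal sketch using the AWS CLI (the bucket and key names are placeholders):

# Check the storage class of an object; GLACIER-class objects still show up here
aws s3api head-object --bucket my-bucket --key Documents/report.pdf

# Ask S3 to make a temporary copy of the archived object available for 7 days
aws s3api restore-object --bucket my-bucket --key Documents/report.pdf --restore-request '{"Days":7}'

# Once the restore completes, download the object as usual
aws s3 cp s3://my-bucket/Documents/report.pdf .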

Cloud

For the last couple of years we have heard a lot about the cloud. For GNU/Linux it was a natural transition, but for proprietary software vendors it is a life belt: they basically cannot keep up with the software development pace set by the Open Source community that fuels today's Internet technologies.
You can see it clearly at Microsoft, where they are abandoning their regular services and promoting cloud solutions for everything, replacing one-time payments with monthly subscriptions. The cloud is becoming their flagship product.

The day will come

when someone turns on his PC expecting Windows, but GNU/Linux will be booting up instead.
Of course this happens even today, but in my understanding Windows will eventually be completely replaced by GNU/Linux or other open source packages.
At the moment many Windows-compatible tools are actually open source, and they are the most reliable and safest to install compared to the built-in ones, freeware, shareware, or abandonware.

For instance, right now we already see a fully functional open source alternative to Microsoft Exchange and a fully functional alternative to Windows SBS.
The time will come when all of this morphs into open source, as Microsoft is no longer interested in regular servers and is dedicating itself to cloud solutions, or rather retreating to the cloud.

Reliable incremental backup to S3->Glacier

Use this script:
#!/bin/bash
#

# Note, to pull a file from s3 use "s3cmd get s3://bucket/file destinationfile"
# You must have the proper .s3cfg file in place to decrypt the file.

# You may also use "gpg encryptedfile" and supply the encryption code if you download
# from the web interface. Good luck.

# The bucket should be set to transfer to Glacier. To retrieve, you need to initiate a
# retrieval request from the S3 web interface. To retrieve an entire folder, there is a
# Windows program called S3 Browser that can transfer entire folders out of Glacier.

# Define the folders of files to be backed up in SOURCE
SOURCE=(
"/home/owner/Documents"
"/home/owner/Pictures"
"/mnt/files/Photographs"
"/mnt/files/Documents"
"/mnt/files/Home Movies"
)

IFS=$(echo -en "\n\b")
logFile=/mnt/files/scripts/backupmanifest.log
bucket=MyBucket
s3cfg=/home/owner/.s3cfg

touch $logFile
echo Finding files and performing backup: Please wait...

# for loop to go through each item of array SOURCE which should contain the
# directories to be backed up

for i in "${SOURCE[@]}"
do

# nested for loop to run find command on each directory in SOURCE

for x in `find $i`
do
# x is each file or dir found by 'find'. if statement determines if it is a regular file

if [ -f "$x" ]
then
# create a hash to mark the time and date of the file being backed up to compare later for
# incremental backups

fileSize=`stat -c %s $x`
modTime=`stat -c %Y $x`
myHash=`echo $x $fileSize $modTime | sha1sum`

# If statement to see if the hash is found in log, meaning it is already backed up.
# If not found proceed to backup

if ! grep -q $myHash $logFile
then
echo Currently uploading $x

# s3cmd command to put an encrypted file in the s3 bucket
# s3out var should capture anything in stderr in case of file transfer error or some other
# problem. If s3out is blank, the transfer occurred without incident. if an error occurs
# no output is written to the log file but output is written to an error log and s3out is
# written to the screen.

s3out=$(s3cmd -c $s3cfg -e put $x s3://$bucket/$HOSTNAME$x 2>&1 > /dev/null)
if [ "$s3out" = "" ]
then
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
else
# s3out had content, but it was possibly a warning and not an error. Checking to see if
# an uploaded file exists on S3 dated within the last 2 minutes. If so, the file will be considered
# uploaded. Two minutes is to account for variance between local and remote time signatures.

date1=$(date --date="$(s3cmd ls s3://$bucket/$HOSTNAME$x | awk '{print $1 " " $2 " +0000"}')" +%s)
date2=$(date +%s)

datediff=$(($date2-$date1))

if [[ $datediff -ge -120 ]] && [[ $datediff -le 120 ]]
then
echo There was a possible error but the time of the uploaded file was written within
echo the last 2 minutes. File will be considered uploaded and recorded as such.
echo $x :///: $fileSize :///: $modTime :///: $myHash >> $logFile
echo `date`: $x had warnings but seemed to be successfully uploaded and was logged to main log file >> $logFile.err
else
echo $s3out
echo `date`: $s3out >> $logFile.err
fi
echo ------------------------------------------------------------------------------------
fi
fi
fi

done
done

# processed all files in SOURCE. Now upload actual script and file list. They are not encrypted.

echo Uploading $logFile
s3cmd put $logFile s3://Linux-Backup > /dev/null
echo Uploading $0
s3cmd put $0 s3://Linux-Backup > /dev/null

echo
echo Backup to S3 has been completed. You may proceed with life.

Thanks to TRIA Technology

I execute this script in ZSH instead and had to replace:
datediff=$(($date2-$date1))
with
datediff=$(expr $date2 - $date1)
otherwise the script has problems on some files.

Because the encryption does not give encrypted files any special extension, use this script to batch decrypt them:
for i in ./*; do
# read the passphrase from fd 3 (passphrase.txt is a placeholder for whatever file holds your passphrase)
/usr/bin/gpg --batch --passphrase-fd 3 --decrypt "$i" 3< passphrase.txt > "$i.decoded" ;
done

You can also use --passphrase or --passphrase-file instead.
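For example, a minimal sketch of the same loop with --passphrase-file (passphrase.txt is again a placeholder; newer GnuPG 2.x builds may also need --pinentry-mode loopback to run non-interactively):

for i in ./*; do
# passphrase is read from the file instead of a file descriptor
/usr/bin/gpg --batch --passphrase-file passphrase.txt --output "$i.decoded" --decrypt "$i" ;
done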

HERE you can calculate your monthly costs.

Reduced redundancy explained here and there.
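For what it's worth, s3cmd can also upload straight into the reduced redundancy storage class; a minimal sketch (the bucket and file names are placeholders):

# store the object with reduced redundancy for a lower per-GB price
s3cmd put --reduced-redundancy somefile.tar.gz s3://MyBucket/somefile.tar.gz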