Use Amazon S3 for quick-and-dirty MySQL backups

I manage a few basic web applications, and recently had to make a potentially breaking database change. I thought about just dumping out the database to a .sql file so I could restore it later, but then figured I might as well kill two birds with one stone, and solve the problem for good.

The server is a Debian box. I set up a shell script that dumps out a MySQL database, compresses it, and pushes it to Amazon S3. It’s a pretty straightforward setup.

On S3’s side I created a bucket (bucket-name) in the US Standard region, then created an IAM user, saved the Access Key and Secret Key, and attached the following Inline Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/*"
            ]
        }
    ]
}

That sets up the user to work specifically with one bucket in your S3 environment. I then installed the s3cmd command-line utility:

sudo apt-get install s3cmd

You need to configure it, which is done interactively:

s3cmd --configure
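
If you’d rather script this step (for automated provisioning, say), the wizard just writes out ~/.s3cfg. A minimal file needs little more than the credentials – the values here are placeholders:

[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY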

Once it’s configured, it just takes a basic shell script:

#!/bin/bash

now=$(date +"%Y-%m-%d")
_file="mysql-backup-$now.tar.gz"

# Dump the database (note: no space between -p and the password)
mysqldump -u username -ppassword database > ~/backup.sql

# Compress (the dump was written to the home directory)
tar -czvf "$_file" -C ~ backup.sql

# Push to Amazon
s3cmd put "$_file" s3://bucket-name/"$_file"

# Cleanup
rm "$_file"
rm ~/backup.sql

Make the script executable (chmod +x) and you’re good to go! Set it up to run via cron once a day, and maybe configure S3 to only retain the last 28 days of daily backups, and there you have it – instant backup solution, just add water :)
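
For reference, a crontab entry along these lines runs the script daily at 02:00 (the path is a placeholder – point it at wherever the script lives):

0 2 * * * /home/user/mysql-backup.sh

For the retention side, newer versions of s3cmd can set a bucket lifecycle rule directly, something like:

s3cmd expire s3://bucket-name --expiry-days=28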

Preparing a new Ubuntu 14.04.2 Server for Laravel 5.1

You’re gonna want to put on your admin hat! This guide is written for a new Ubuntu 14.04.2 LTS droplet built on digitalocean.com. We’ll go for repository versions of all these components. First, the usual LEMP setup guide is 99% accurate for 14.04.2 and will get Nginx, MySQL and PHP taken care of.

Bonus: Jenkins. I love Jenkins, mainly because it’s a one-install server that lets me schedule automated builds and other deployment tasks via my browser. It’s easy to set up, easy to secure, and once it’s running you may never need to SSH into the server again. Just follow the regular instructions at http://pkg.jenkins-ci.org/debian/ – that will put Jenkins on port 8080 – then follow the quick guide below (Securing a default Jenkins install) to enable some basic security.

We’ll need some more software though. The list below is taken from the components that Homestead uses, and all commands are being run as the root user.

apt-get install git redis-server beanstalkd memcached php5-cli php5-mcrypt php5-curl nodejs nodejs-legacy npm

An oddity here – you need to manually enable mcrypt before PHP can use it:

php5enmod mcrypt
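
If you want to confirm the module loaded, php5 -m lists the active modules:

php5 -m | grep mcrypt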

With that done, install the tools Laravel uses:

npm install -g bower grunt gulp

And last but not least, Composer:

curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer
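
That installs “composer” as a global CLI command in one go; a quick check that it landed on the PATH:

composer --version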

So we’ve got a Laravel-capable server now. Let’s get a basic CI pipeline going using Jenkins. We’ll set up something simple that pulls changes from our develop branch into a develop directory on the server, and runs any migrations or updates that are required. This is not the best way to handle production releases (you’ll want versioned-everything on the server), but it’ll do for a quick setup.

Before getting started – database access. Default Laravel 5.1 applications are configured to connect to localhost as ‘forge’ with no password, and to use the ‘forge’ database. You might have different details in your project, in which case you’ll need to configure MySQL with a matching username, password and database. We’ll just set it up to handle Laravel’s defaults here:

mysql -uroot -p
Enter password:
create database forge;
grant all on forge.* to 'forge'@'localhost' identified by '';
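
A quick way to confirm the grant took – this should print a 1 without prompting for a password, since the password is blank:

mysql -u forge forge -e 'SELECT 1;'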

There’s a little more server-side prep to do – folders and SSH keys. Prepare the folders by ensuring the jenkins user has write access. I prefer having my projects live in a root folder, with symlinks out to the html root:

mkdir -p /projects/project-1/develop
cd /projects
chown -R jenkins:jenkins *

Now set up Jenkins’ SSH key. On the server, run:

su jenkins
ssh-keygen

Accept all the defaults. That should create a key in /var/lib/jenkins/.ssh/. Run:

cat /var/lib/jenkins/.ssh/id_rsa.pub

Copy the public key, and save it somewhere if you’re going to use this same server to pull and compile multiple repositories. For now, just add it as a Deployment Key under the repository Settings in Bitbucket.

Finally, we’ll set up the symlink. It’ll be broken until the project itself is checked out for the first time. Start by dropping the default html folder:

cd /var/www
rm -rf html

Now create a symlink that points to where the public subfolder will be:

ln -s /projects/project-1/develop/public/ html

Finally, to Jenkins. We’re going to use the most basic job possible – a straightforward series of commandline instructions. Create a new Freestyle project and give it a name. Under the Build heading, click Add Build Step -> Execute Shell. In there, we’ll just put the commands we would have run ourselves:

# Move to working directory
cd /projects/project-1/develop;

# If artisan exists and composer has run, bring the app down for maintenance
if [ -e artisan ] && [ -d vendor ]; then
    php artisan down
fi;

# If we've already cloned, update - otherwise clone from scratch
if [ -e artisan ]; then
    git pull
else
    git clone -b develop git@bitbucket.org:woganmay/project-1.git .;
fi;

# Ensure storage folders are writeable
chmod -R 0777 storage/;

# Run updates and migrations
composer update;
php artisan migrate --force;

# Bring us back out of maintenance mode
php artisan up;

Save the job config and hit Build Now. This will schedule a new job and run it immediately, showing its status under the Build History widget. You can view the output of the job as it runs by hovering over the little down arrow next to the status light and clicking Console Output.

That shows the raw output from all your commands. It’ll take a while the first time around, as it has to download all the composer packages for the first time; subsequent runs will be a lot faster, since everything installs from cache. When it’s done, you should see ‘Finished: SUCCESS’ towards the bottom of the Console Output. That means the project deployed correctly, and you should now be able to browse to it via HTTP. Future deployments can be kicked off by logging into Jenkins and clicking Build Now.

Securing a default Jenkins install

Jenkins has some insanely granular permission controls, but when you install it for the first time, the default is to allow 100% public access to everything. Obviously you’ll want to fix that.

First, click Manage Jenkins, then Configure Global Security. Tick Enable security, select Jenkins’ own user database, make sure Allow users to sign up is ticked, and save. We’re leaving Authorization on Anyone can do anything for now.

This will expose the sign-up link at the top right.

Sign up to create a login for yourself. You’ll be authenticated immediately. Go back to Manage Jenkins -> Configure Global Security, and flip the Authorization switch to Logged-in users can do anything. Save.

So that gives us a sane default – we have basic login control. You might want to disable Allow users to sign up if this instance of Jenkins is internet-facing.

We can go one step further – flip Authorization to Matrix-based security, add your own username to the matrix, select Administer, and save. This removes ALL the rights anonymous users have – where the public might previously have seen the People list or the Jobs list, an anonymous user now gets nothing but a login prompt.

Youth Day 2015

Quoting the official Government of South Africa website:

In 1975 protests started in African schools after a directive from the then Bantu Education Department that Afrikaans had to be used on an equal basis with English as a language of instruction in secondary schools. The issue, however, was not so much the Afrikaans as the whole system of Bantu education which was characterised by separate schools and universities, poor facilities, overcrowded classrooms and inadequately trained teachers.

So really, not much has changed in the last 40 years, then!

Just something to remember on this, the 39th anniversary of the Soweto Day protests.

Recover Windows 8 Key from BIOS using Linux

Machines that ship with Windows 8 have the product key written to an ACPI table (MSDM) in the firmware – so even if you reformat, the product key remains embedded in your machine. There are quite a few ways to recover this key if you’re running Windows, but in my case, I had reformatted and installed Ubuntu.

Turns out it’s not a problem. Install the free acpidump utility:

sudo apt-get install acpidump

Then run it with sudo:

sudo acpidump

Then check the output for a block starting with MSDM:

[Screenshot: acpidump output – the MSDM block, with the product key blurred out]

That blurred block there looks very much like a product key to me!
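
If you’d rather not squint at hex dumps: on kernels that expose the raw ACPI tables under /sys, a one-liner like this should print the key by itself. The 56-byte offset is an assumption based on the standard MSDM layout (a 36-byte ACPI header plus 20 bytes of licensing metadata ahead of the key):

sudo tail -c +57 /sys/firmware/acpi/tables/MSDM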

Change DNS Servers on the Telkom 921VNX PACE Router

The VDSL router that Telkom ships comes with a web-based admin panel, which for some reason will not let you configure the DNS servers that your LAN devices use. PuTTY to the rescue!

Use PuTTY (or whatever else) to SSH into your router – usually 10.0.0.2.

Username: admin
Password: nology*/

You’ll get a very minimal CLI. To update the config related to your LAN, do:

cd LANDevice
cd 1
cd HostConfig
ls

This will bring up a bunch of settings related to your LAN. The first things to look for are the Min and Max address config items – these define the IP range your devices are on:

LANDevice_1_HostConfig_MinAddress = [10.0.0.3]
LANDevice_1_HostConfig_MaxAddress = [10.0.0.254]

If your computer is not within that range, you may want to check LANDevice subdirectories 2 and up. If it does match, though, you’ll see an entry that you can’t edit via the web interface:

LANDevice_1_HostConfig_DNSServers = []

To set, for instance, Google Public DNS, you’d do:

set DNSServers 8.8.8.8,8.8.4.4
fcommit

That will result in your router cycling something, and your connection will go down for a bit. When it comes back up, the router will use those DNS servers to handle queries coming from the LAN. If that doesn’t work, try rebooting the router (just ‘reboot’ from the CLI).
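
Once the router is back up, you can SSH in again, cd back into LANDevice/1/HostConfig and run ls to confirm the change took. You should see something along these lines:

LANDevice_1_HostConfig_DNSServers = [8.8.8.8,8.8.4.4]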

Load test on a Mecer 650VA UPS

So it’s loadshedding season, and I’ve decided to learn a bit more about the UPS game. That’s the dream, isn’t it? Being able to work (or play, or watch) through the loadshedding, completely uninterrupted?

To that end, I decided to start small and get an idea of what UPSes are capable of. I bought the smallest one I could from the newly-launched Powerfully.co.za – a Mecer 650VA Offline UPS. The experience there was stellar: I placed the order, paid online, and received it (in Gardens, Cape Town) the very next day. It even had some charge in it, and I got to annoy the office with the every-10-second beep it makes when there’s no AC power.

Note: The Mecer UPSes use kettle plugs, and the one I got did not ship with a plug I could stick into a wall. Not an issue for most computer users (everyone has a kettle plug lying around these days), but if you have none, you may want to order one along with the UPS.

I got an Offline UPS, and if you’re browsing around, you’ll note there are Offline (or Line Interactive) and Online UPSes. The difference (other than cost) is the changeover time. Offline UPSes use a mechanical switch to flip you over to UPS power in the event of an outage, incurring a delay anywhere from 5-25ms. That’s fine for most consumer electronics, but if you have something that absolutely cannot go down, an Online UPS is a better bet – it doesn’t do any switchover, so there’s no delay.

My setup is pretty basic. I know that, with a 12V 7Ah battery, there’s no way to feed my beast of a desktop PC (650W power supply!) for any appreciable length of time, so instead, this is my setup:

  • Mecer 650VA UPS connected to the wall
  • Power strip plugged into the back (it has a regular 3-pin socket)
  • Router, TV and Chromecast plugged into the strip

The router uses minimal power, in the 1.2A range – and the Chromecast uses about the same. The TV is a different story. At peak consumption it can draw up to 100W, but I managed to cut that in half using the energy-saving mode (Medium) on the TV. You can go to Maximum, but then it drops the screen brightness by about 80%.

With this setup, my WiFi is guaranteed – the Telkom line does not go down when there’s loadshedding, and all my gadgets connect over WiFi, so at least I have uninterrupted internet. The TV was just there to give it a proper stress test. That test was monitored with the supplied software – I installed it on a Windows laptop and plugged the USB cable into the UPS to get some information from it. It’s pretty simplistic, and ultimately not very useful:

[Screenshot: the bundled UPS monitoring software]

The UPS comes with a very annoying audible alarm, which sounds every 10 seconds when the Eskom AC power is down. Thankfully, you can disable that (in the Real Time section), and that’s about all the software is good for. The battery indicator is notoriously unreliable, as I’ll get to in a minute.

I set up a stress test by leaving the UPS plugged in and charging for 24 hours. Then I cast a YouTube HD clip to my TV (StarCraft 2 finals – constant audio and visual output), ensuring that all 3 devices were active. Then I flipped the switch on the wall, and the UPS went into battery mode with an audible click. None of my devices noticed the drop – WiFi stayed up, the TV stayed on, and the Chromecast kept streaming.

These were the readings over the duration of the test:

  • Start: 100% capacity
  • 2 minutes: 68% capacity
  • 3 minutes: 72% capacity (go figure)
  • 30 minutes: 48% capacity
  • 38 minutes: 15% capacity
  • 41 minutes: 2% capacity, battery died

The reporting was wildly inconsistent. The battery “drained” 30% in 2 minutes, then took 30 minutes to drain another 30%, and then finished off the rest in about 10 minutes. The only reliable indicator was the lights – the red light comes on, and the yellow light flashes frantically when the battery is about 2 minutes away from death.

This does mean, though, that a R670 UPS was able to keep a 42″ LED TV operational, streaming HD content, for a full 40 minutes. Which is not bad for a device meant mainly to give you enough time to safely power down your computer.
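
As a rough sanity check on that number (assuming around 80% inverter efficiency and a combined draw in the region of 60W – both of these are guesses on my part):

12V × 7Ah = 84Wh of stored energy
84Wh × 0.8 ≈ 67Wh usable
67Wh ÷ 60W ≈ 67 minutes at full discharge

Lead-acid batteries aren’t meant to be run completely flat, so getting about 40 minutes out of it is in the right ballpark.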

In reality, all it’s going to power is my router, the TV in standby mode (0.3W draw – negligible) and an idle Chromecast. It should be able to do that for far more than 40 minutes, which will be my next test.