Modified Variant Whitespace Template

Variant Whitespace is a nice minimalistic template by Andreas Viklund.

Andreas chose to put the sidebar above the content, which I prefer not to do. Furthermore, as the sidebar was a “float” that came before the content, it caused additional inconveniences. For example, if you had an element with clear: both, it would be pushed below the sidebar. I’ve patched it a bit in order to fix those issues. You can find my modified version here: variant-whitespace.tar.gz

Author: (no author) not defined in authors file

I came across the following error message:

Author: (no author) not defined in authors file

when I tried to import an SVN repository (Open Yahtzee’s) into Git using git svn with an authors file (specified via -A). Indeed, the first commit to SVN (which was done using cvs2svn) had no username for the committer. Apparently, the workaround is to add the following line to your authors file:

(no author) = no_author <no_author@no_author>

I also tried doing the same without an email address, but it just didn’t work. It seems Git requires that all authors have an email address.
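Since a single unmapped committer aborts the whole git svn import, it can save time to sanity-check the authors file up front. The sketch below is a rough validator for that; the regular expression encodes my reading of the login = Full Name &lt;email&gt; line format (not an official grammar), and in particular treats the angle-bracketed email part as mandatory:

```python
import re

# One mapping per line: "svn_login = Full Name <email>".
# The angle brackets (and hence the email field) appear to be mandatory.
AUTHOR_LINE = re.compile(
    r"^\s*(?P<login>.+?)\s*=\s*(?P<name>[^<]*?)\s*<(?P<email>[^>]*)>\s*$"
)

def check_authors_file(text):
    """Return (line_number, line) pairs that don't match the expected format."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if not line.strip():
            continue  # skip blank lines
        if not AUTHOR_LINE.match(line):
            bad.append((lineno, line))
    return bad
```

For example, check_authors_file(open("authors.txt").read()) returns an empty list for a well-formed file, and flags any line that is missing the email part.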

passha – A hashapass variant

I like the idea of hashapass, but I’m unwilling to use an online tool because I fear that someday it might be compromised. So I wrote my own variant of hashapass. It uses slightly longer passwords and SHA-256 as the hash function.

#! /usr/bin/python3

"""
passha.py - Generate passwords from a master password and a parameter.

Based on hashapass (http://hashapass.com)
"""
import base64
import hashlib
import hmac

def main(passwd, param):
    # First ten base64 characters of HMAC-SHA256(master password, parameter).
    hm = hmac.new(passwd.encode(), param.encode(), hashlib.sha256)
    print(base64.b64encode(hm.digest()).decode()[:10])

if __name__ == "__main__":
    import getpass
    passwd = getpass.getpass()
    param = input("Parameter: ")
    main(passwd, param)
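As a sanity check, the derivation can be exercised on its own (restated here with hmac.new and the base64 module): it is deterministic, different parameters give different passwords, and the result is always ten characters long.

```python
import base64
import hashlib
import hmac

def derive(passwd, param):
    # Same scheme as passha.py: first ten base64 characters of
    # HMAC-SHA256(master password, parameter).
    hm = hmac.new(passwd.encode(), param.encode(), hashlib.sha256)
    return base64.b64encode(hm.digest()).decode()[:10]

# The same inputs always give the same password...
assert derive("master", "example.com") == derive("master", "example.com")
# ...different parameters give different passwords...
assert derive("master", "example.com") != derive("master", "example.org")
# ...and the result is always ten characters (base64 of a 32-byte digest
# is 44 characters, of which we keep the first ten).
assert len(derive("master", "example.com")) == 10
```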

Reinstall GRUB in Ubuntu

My brother asked me to repair his boot loader after he accidentally erased his MBR. This can be done easily via a LiveCD and the command line.

Boot the system using a LiveCD (I used Ubuntu from a USB stick) and do the following:

$ sudo mount /dev/sda1 /mnt   # sda1 being the root partition; adjust to yours
$ sudo mount --bind /usr/sbin /mnt/usr/sbin
$ sudo mount --bind /usr/lib /mnt/usr/lib
$ sudo mount --bind /dev/ /mnt/dev
$ sudo chroot /mnt

# grub-install /dev/sda

I hope it will be useful for others as well, as the Ubuntu community documentation offers a solution based on Boot-Repair, which seems like overkill to me.

Automated Encrypted Backups to S3 Using Duplicity

This tutorial will hopefully guide you through making automated encrypted backups to Amazon’s S3 using duplicity. It was written as a follow-up to Using Duplicity and Amazon S3 – Notes and Examples, in order to organize all the necessary information into a simple tutorial.

We’ll start by creating a simple wrapper for duplicity:

#! /usr/bin/python
import sys
import os

duplicity_bin = '/usr/bin/duplicity'

# Start from the current environment and then add the credentials,
# so the values below always take effect.
env = dict(os.environ)
env.update({
    'AWS_ACCESS_KEY_ID':     'PUT YOUR KEY ID HERE',
    'AWS_SECRET_ACCESS_KEY': 'PUT YOUR SECRET ACCESS KEY HERE',
    'PASSPHRASE':            'PUT YOUR ENCRYPTION PASSPHRASE HERE',
})

os.execve(duplicity_bin, sys.argv, env)

Save this under duplicity-wrapper.py and chmod 0500 it so only you will be able to read and execute it.

Note: You’ll want to write down the passphrase and store it in a safe location (preferably in two separate locations). That way, in case you need to restore the backups, you won’t have useless encrypted files.

Now edit your crontab and add a line like the following:

10 1 * * 0 /path/to/duplicity-wrapper.py /path/to/folder/ s3+http://bucket-name/somefolder >> ~/log/backups.log 2>&1

This will back up /path/to/folder every Sunday at 01:10. The backup will be encrypted with whatever passphrase you’ve set in duplicity-wrapper.py, and the output of the backup process will be appended to ~/log/backups.log.

You should also run

/path/to/duplicity-wrapper.py full /path/to/folder/ s3+http://bucket-name/somefolder

in order to create full backups. You might want to periodically verify your backups:

/path/to/duplicity-wrapper.py collection-status s3+http://bucket-name/somefolder
/path/to/duplicity-wrapper.py verify s3+http://bucket-name/somefolder /path/to/folder/

to check the status of the backups and verify them.

And last but not least, in case you ever need the backups, you can restore them using:

/path/to/duplicity-wrapper.py restore s3+http://bucket-name/somefolder /path/to/folder/

Security Considerations

As I know some people will comment on saving the encryption passphrase plainly in a file, I will explain my reasoning. I use the above encryption in order to secure my files in case of data leakage from Amazon S3. In order to read my backups, or silently tamper with them, someone will have to get the passphrase from my machine. While this isn’t impossible, I will say it’s unlikely. Furthermore, if someone has access that allows him to read files from my computer, he doesn’t need the backups; he can access the files directly.

I’ve given some thought to making the backups more secure, but it seems you always have to compromise on either automation or incremental backups. But, as I wrote, the current solution seems to me strong enough given the circumstances. Nonetheless, if you’ve got a better solution, it would be nice to hear.

Extracting Data from Akonadi (Kontact)

In older versions of KDE, Kontact used to keep its data in portable formats: iCalendar files for KOrganizer and vCard for KAddressBook. But some time ago, Kontact moved to Akonadi, a more sophisticated backend storage system. By default (at least on my machine), Akonadi uses MySQL (with InnoDB) as the persistent storage. I didn’t consider it thoroughly when moving my data to Gnome, and I got stuck with the data.

To make things worse, somewhere along the update to KDE 4.6, I got some of the data moved to ~/.akonadi.old. Being stuck with the InnoDB tables, I tried the following solutions without much success:

  1. Loading the InnoDB tables into a MySQL server. It didn’t go well; MySQL complained about weird stuff, and I gave up in search of a simpler solution.
  2. I booted an openSUSE virtual machine with KDE and tried loading my old data. Apparently, my ~/.akonadi folder contained nothing interesting, and openSUSE’s KDE 4.6 refused to load the data from ~/.akonadi.old after I renamed it.

So, being upset about Akonadi, I did some grepping and found strings from my contacts and todo lists in the following files:

Binary file .local/share/akonadi.old/db_data/ibdata1 matches
Binary file .local/share/akonadi.old/db_data/akonadi/parttable.ibd matches
Binary file .local/share/akonadi.old/db_data/ib_logfile0 matches

I opened the files with vim and found out they contained vCards and iCalendar blobs. So instead of directly storing them on the file system, where they are easily accessible, they are stored in the DB files. I figured it would be easiest to just extract the data from the binary files. I’ve used the following script:

import sys

START_DELIM = b"BEGIN:VCALENDAR"
END_DELIM = b"END:VCALENDAR"

def main():
    # Read raw bytes; the DB files are binary with text blobs inside.
    bin_data = sys.stdin.buffer.read()
    blocks = []

    start = bin_data.find(START_DELIM)
    while start > -1:
        end = bin_data.find(END_DELIM, start + 1)
        if end == -1:
            break  # unterminated block at the end of the file
        blocks.append(bin_data[start:end + len(END_DELIM)])
        start = bin_data.find(START_DELIM, end + 1)

    sys.stdout.buffer.write(b"\n".join(blocks))

if __name__ == "__main__":
    main()

It reads binary data from stdin and writes out the iCalendar blocks embedded in it. If you change START_DELIM and END_DELIM to use VCARD instead of VCALENDAR, it will extract the contacts’ data instead.
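The approach is nothing more than delimiter scanning over raw bytes; here is the same idea in isolation on a made-up blob, where the junk bytes stand in for the surrounding InnoDB page data:

```python
def extract_blocks(data, start_delim, end_delim):
    """Return every start_delim...end_delim span found in data."""
    blocks = []
    start = data.find(start_delim)
    while start > -1:
        end = data.find(end_delim, start + 1)
        if end == -1:
            break  # unterminated block at the end of the data
        blocks.append(data[start:end + len(end_delim)])
        start = data.find(start_delim, end + 1)
    return blocks

blob = (b"\x00junk\x01BEGIN:VCARD\nFN:Alice\nEND:VCARD"
        b"\xff junk BEGIN:VCARD\nFN:Bob\nEND:VCARD\x00")
print(extract_blocks(blob, b"BEGIN:VCARD", b"END:VCARD"))  # the two vCard blocks
```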

This migration got me thinking about how important it is for an application’s data to be easily portable. It’s something I feel not many projects rank high enough among their priorities.

Importing CSV to Evolution

I’ve decided to try GNOME on a new machine that I’ve got, and as part of the move I’ve switched to Evolution (from Kontact). I had some contacts stored in a spreadsheet, which I’ve tried to import as CSV to Evolution.

Apparently, unlike Kontact, Evolution won’t ask you what each column means. It just assumes the CSV follows some fixed scheme of its own, and forces that scheme on your file even if it looks completely different. The result: a complete mess of the fields in each contact.

I didn’t find a reference for how Evolution expects its CSVs to look, and I didn’t want to reverse-engineer it either. So finally, I set up a virtual machine, loaded it with the openSUSE KDE live CD, imported the CSV into Kontact, and exported it as vCard, which I then imported into Evolution.

I believe that the current CSV import in Evolution just causes user frustration, as it doesn’t act as expected.

Other weird problems I’ve encountered in Evolution, which I didn’t solve yet:

  1. Evolution gives me “Could not remove address book” when I try to delete an existing address book. After restarting the program, I succeeded in deleting some of them, but not all.
  2. When I imported the vCard from Kontact, the contacts appeared in every address book (except one) and also magically appeared in new address books I created. The contacts in each address book seem to be linked together: when I deleted them from one address book, they disappeared from the rest as well.

If you know how to solve these issues, I would really like to hear.

Check if a Server Is About to Run fsck

A couple of weeks ago, I installed some updates to my server. When I restarted it, it didn’t come up. To make things worse, the IPMI console decided to go on strike, so I couldn’t see what was really going on. I presumed that the system wasn’t responding because of some kernel panic. After a while, I gave up for that night in the hope that by morning the IPMI would be sorted out. To my surprise, the IPMI was still out of order, but the server was up again. Apparently, the system wasn’t stuck on a kernel panic, but on fsck’ing the hard disks. So, in order to avoid such problems in the future, I looked for a way to tell when the system is going to run fsck after the next reboot (I also got the IPMI fixed).

For ext2/3/4 filesystems, tune2fs shows the relevant counters:

$ sudo tune2fs -l /dev/sda6

In the output, you will find the following lines:

Mount count:              2
Maximum mount count:      36
Last checked:             Tue Jul 26 04:49:18 2011
Check interval:           15552000 (6 months)

“Maximum mount count” is the number of mounts after which the filesystem will be checked by fsck. “Check interval” is the maximum time between two filesystem checks. The command also lets you see the actual mount count since the last check and when it took place.
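This check is easy to script. Below is a minimal sketch that parses the tune2fs -l output shown above; the function name and the parsing regex are mine, and you would feed it the real output, e.g. via subprocess.check_output(["tune2fs", "-l", "/dev/sda6"]) as root:

```python
import re

def mounts_until_fsck(tune2fs_output):
    """How many mounts remain before fsck kicks in, or None if
    mount-count checking is disabled (maximum mount count <= 0)."""
    fields = dict(re.findall(r"^([^:]+):\s+(.+)$", tune2fs_output, re.M))
    max_count = int(fields["Maximum mount count"])
    if max_count <= 0:
        return None
    return max_count - int(fields["Mount count"])

# With the values shown above, 34 mounts remain before the next check:
sample = (
    "Mount count:              2\n"
    "Maximum mount count:      36\n"
    "Last checked:             Tue Jul 26 04:49:18 2011\n"
    "Check interval:           15552000 (6 months)\n"
)
print(mounts_until_fsck(sample))  # -> 34
```

Note that it only covers the mount-count trigger; a check can also be forced by the “Check interval” elapsing since “Last checked”.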

Missing *.la files

Sometimes, when you compile a package, it fails and complains that it can’t find a *.la file for some library that is installed. Recently, I ran into this while compiling dev-libs/gobject-introspection, which complained about a missing libpng14.la. The solution is to run:

sudo lafilefixer --justfixit

It won’t create the .la file, but it will fix the libtool references so that nothing points to it, and packages will compile fine.

Deleting Comments from Tickets in Trac 0.12

About a year ago, I wrote about a way to delete comments from tickets in Trac prior to version 0.12 (as it didn’t exist back then). Basically, the method was to directly delete the comment from the database. Lately, spammers have been harassing one of my Trac installations, bypassing the spam filtering and changing ticket properties. The old method wouldn’t revert those changes. After searching for a solution, I found a little-documented option in Trac 0.12 that allows you to delete comments and revert changes to tickets.

To enable it, go to the admin panel->Plugins->Trac 0.12 and enable TicketDeleter under tracopt.ticket.deleter.*. This will add a “Delete” button right next to the “Reply” and “Edit” buttons of every comment. It will also revert any changes to the ticket properties.

See #3641 and [9270] for the relevant ticket and changeset in Trac’s own Trac.