Modified Variant Whitespace Template

Variant Whitespace is a nice minimalistic template by Andreas Viklund.

Andreas chose to put the sidebar above the content, which I prefer not to do. Furthermore, as the sidebar was a “float” that came before the content, it caused additional inconveniences. E.g. if you had an element with clear: both, it would be pushed below the sidebar. I’ve patched it a bit in order to fix those issues. You can find my modified version here: variant-whitespace.tar.gz

Author: (no author) not defined in authors file

I came across the following error message

Author: (no author) not defined in authors file

when I tried to import an SVN repository (Open Yahtzee’s) to Git using git svn with an authors file (specified using -A). Indeed, the first commit to the SVN repository (which was done using cvs2svn) had no username for the committer. Apparently, the workaround is to add the following line to your authors file:

(no author) = no_author <no_author@no_author>

I also tried doing the same without an email address, but it just didn’t work. It seems Git requires that all authors have an email address.
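
For reference, the full import then looks something like this (the repository URL and directory name here are placeholders, not the actual ones I used):

$ git svn clone -A authors.txt http://svn.example.com/openyahtzee openyahtzee-git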

passha – A hashapass variant

I like the idea of hashapass, but I’m unwilling to use an online tool, as I fear that someday it might be compromised. So I wrote my own variant of it. It uses slightly longer passwords, and SHA-256 as the hash function.

#! /usr/bin/python

"""
passha.py - Generate passwords from a master password and a parameter.

Based on hashapass (http://hashapass.com)
"""
import hmac
import hashlib

def main(passwd, param):
    # key the HMAC with the master password and hash the parameter
    hm = hmac.HMAC(passwd, param, hashlib.sha256)
    # base64-encode the digest and keep the first 10 characters
    print hm.digest().encode("base64")[:10]
    
if __name__=="__main__":
    import getpass
    passwd = getpass.getpass()
    param = raw_input("Parameter: ")
    main(passwd, param)
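
Usage is interactive; a session looks roughly like this (the parameter is whatever site or account name you choose, and the last line is a placeholder for the generated password):

$ python passha.py
Password:
Parameter: example.com
<first 10 base64 characters of the HMAC>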

Reinstall grub in Ubuntu

My brother asked me to repair his boot loader after he accidentally erased his MBR. This can be done easily via a LiveCD and the command line.

Boot the system using a LiveCD (I’ve used Ubuntu from a USB stick) and do the following, where /dev/sda1 is the root partition of the installed system:

$ sudo mount /dev/sda1 /mnt
$ sudo mount --bind /usr/sbin /mnt/usr/sbin
$ sudo mount --bind /usr/lib /mnt/usr/lib
$ sudo mount --bind /dev/ /mnt/dev
$ sudo chroot /mnt

# grub-install /dev/sda
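
Depending on the setup, you may also want to regenerate the GRUB configuration while still inside the chroot; on Ubuntu that is:

# update-grub
# exit

After exiting the chroot you can simply reboot into the restored system.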

I hope it will be useful for others as well, as the Ubuntu community documentation offers a solution based on Boot-Repair, which seems like overkill to me.

Automated Encrypted Backups to S3 Using Duplicity

This tutorial will hopefully guide you in making automated encrypted backups to Amazon’s S3 using duplicity. It was written as a follow-up to Using Duplicity and Amazon S3 – Notes and Examples, in order to organize all the necessary information into a simple tutorial.

We’ll start by creating a simple wrapper around duplicity:

#! /usr/bin/python
import sys
import os

duplicity_bin = '/usr/bin/duplicity'

# start from the current environment, then override the
# credentials and passphrase so they always take effect
env = dict(os.environ)
env.update({
    'AWS_ACCESS_KEY_ID':     'PUT YOUR KEY ID HERE',
    'AWS_SECRET_ACCESS_KEY': 'PUT YOUR SECRET ACCESS KEY HERE',
    'PASSPHRASE':            'PUT YOUR ENCRYPTION PASSPHRASE HERE',
})

# replace this process with duplicity, passing our arguments through
os.execve(duplicity_bin, sys.argv, env)

Save this as duplicity-wrapper.py and chmod 0500 it, so that only you will be able to read and execute it.
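
In other words (assuming the file sits in the current directory):

$ chmod 0500 duplicity-wrapper.py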

Note: You’ll want to write down the passphrase and store it in a safe location (preferably in two separate locations), so that if you ever need to restore the backups, you won’t be left with useless encrypted files.

Now edit your crontab and add a line like the following:

10 1 * * 0 /path/to/duplicity-wrapper.py /path/to/folder/ s3+http://bucket-name/somefolder >> ~/log/backups.log 2>&1

This will create a weekly backup of /path/to/folder. The backup will be encrypted with whatever passphrase you’ve put in duplicity-wrapper.py. The output of the backup process will be appended to ~/log/backups.log. (Note that cron runs jobs via /bin/sh, which doesn’t understand bash’s &>> redirection, hence the 2>&1 form above.)

You should also run

/path/to/duplicity-wrapper.py full /path/to/folder/ s3+http://bucket-name/somefolder

in order to create full backups. You might also want to run the following periodically:

/path/to/duplicity-wrapper.py collection-status s3+http://bucket-name/somefolder
/path/to/duplicity-wrapper.py verify s3+http://bucket-name/somefolder /path/to/folder/

These check the status of the backups and verify them, respectively.

And last but not least, if you ever need the backups, you can restore them using:

/path/to/duplicity-wrapper.py restore s3+http://bucket-name/somefolder /path/to/folder/
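
duplicity can also restore a single file, or the state from an earlier point in time; if I recall the options correctly, --file-to-restore and -t do that. For example, to restore one file as it was three days ago (paths here are placeholders):

/path/to/duplicity-wrapper.py restore --file-to-restore some/file -t 3D s3+http://bucket-name/somefolder /tmp/some-file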

Security Considerations

Since I know some people will comment on storing the encryption passphrase in plain text in a file, I will explain my reasoning. I use the above encryption in order to protect my files in case of a data leak from Amazon S3. In order to read my backups, or silently tamper with them, someone would have to get the passphrase from my machine. While this isn’t impossible, I’d say it’s unlikely. Furthermore, if someone has access that allows him to read files from my computer, he doesn’t need the backups; he can access the files directly.

I’ve given some thought to making the backups more secure, but it seems you always have to compromise on either automation or incremental backups. Still, as I wrote, the current solution seems to me strong enough given the circumstances. Nonetheless, if you’ve got a better solution, I would be glad to hear it.

Extracting Data from Akonadi (Kontact)

In older versions of KDE, Kontact used to keep its data in portable formats: iCalendar files for KOrganizer and vCard for KAddressBook. But some time ago Kontact moved to Akonadi, a more sophisticated backend storage. By default (at least on my machine) Akonadi uses MySQL (with InnoDB) as the persistent storage. I didn’t consider this thoroughly when moving my data to Gnome, and I got stuck with the data.

To make things worse, somewhere along the update to KDE 4.6, some of the data got moved to ~/.akonadi.old. Being stuck with the InnoDB tables, I tried the following solutions without much success:

  1. Loading the InnoDB tables into a MySQL server. This didn’t fare well; MySQL complained about weird stuff, and I gave up in search of a simpler solution.
  2. Booting an OpenSuse virtual machine with KDE and trying to load my old data there. Apparently, my ~/.akonadi folder contained nothing interesting, and Suse’s KDE 4.6 refused to load the data from ~/.akonadi.old even after I renamed it.

So, being upset about Akonadi, I did some grepping and found strings from my contacts and todo lists in the following files:

Binary file .local/share/akonadi.old/db_data/ibdata1 matches
Binary file .local/share/akonadi.old/db_data/akonadi/parttable.ibd matches
Binary file .local/share/akonadi.old/db_data/ib_logfile0 matches

I opened the files with vim, and found out they contained vCard and iCalendar blobs. So instead of storing them directly on the file-system, where they would be easily accessible, they are stored inside the DB files. I figured it would be easiest to just extract the data from the binary files, so I used the following script:

import sys

START_DELIM = "BEGIN:VCALENDAR"
END_DELIM = "END:VCALENDAR"

def main():
    # read the raw DB file from stdin and collect every
    # START_DELIM..END_DELIM block embedded in it
    bin_data = sys.stdin.read()
    blocks = []

    start = bin_data.find(START_DELIM)
    while start > -1:
        end = bin_data.find(END_DELIM, start + 1)
        if end == -1:
            # an opening delimiter without a closing one - stop here
            break
        blocks.append(bin_data[start:end + len(END_DELIM)])
        start = bin_data.find(START_DELIM, end + 1)

    print "\n".join(blocks)

if __name__=="__main__":
    main()

It reads a binary file from stdin and outputs the iCalendar data embedded in it. If you change START_DELIM and END_DELIM to VCARD instead of VCALENDAR, it will extract the contacts’ data instead.
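
Assuming the script is saved as extract_akonadi.py (a name I just made up), running it over one of the matching files looks like this:

$ python extract_akonadi.py < .local/share/akonadi.old/db_data/ibdata1 > calendars.ics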

This migration had me thinking how important it is that an application’s data be easily portable. It’s something that, I feel, not many projects place high enough on their priorities.

Importing CSV to Evolution

I’ve decided to try Gnome on a new machine I’ve got, and as part of the move I’ve switched to Evolution (from Kontact). I had some contacts stored in a spreadsheet, which I tried to import into Evolution as CSV.

Apparently, unlike Kontact, Evolution won’t ask you what each column means. It just assumes that the CSV follows some weird scheme of its own. If you try to import the CSV, it will force that scheme on your CSV even if it looks completely different. The result: a complete mess of the fields in each contact.

I didn’t find any reference for how Evolution expects its CSVs to look, and I didn’t want to analyse it myself either. So finally, I set up a virtual machine, loaded it with the OpenSuse KDE LiveCD, imported the CSV into Kontact and exported it as vCard, which I then imported into Evolution.

I believe that the current CSV import in Evolution just causes user frustration, as it doesn’t act as expected.

Other weird problems I’ve encountered in Evolution which I didn’t solve yet:

  1. Evolution gives me “Could not remove address book” when I try to delete an existing address book. After restarting the program I’ve succeeded in deleting some of them, but not all.
  2. When I imported the vCard from Kontact, the contacts appeared in every address book (except one) and also appeared magically in new address books I created. The contacts in each of the address books seem to be linked together: when I tried to delete them from one address book, they disappeared from the rest as well.

If you know how to solve these issues, I would really like to hear about it.

Check if a server is about to run fsck

A couple of weeks ago I installed some updates on my server, and when I restarted it, it didn’t come up. To make things worse, the IPMI console decided to go on strike, so I couldn’t see what was really going on. I presumed that the system wasn’t responding because of some kernel panic. After a while, I gave up for the night, in the hope that by morning the IPMI would be sorted out. To my surprise, the IPMI was still out of order, but the server was up again. Apparently, the system hadn’t been stuck on a kernel panic, but on fsck’ing the hard disks. So, in order to avoid such problems in the future, I looked for a way to tell whether the system is going to run fsck after the next reboot (I also had the IPMI fixed). The answer is tune2fs:

 $ sudo tune2fs -l /dev/sda6

In the output you will find the following lines:

Mount count:              2
Maximum mount count:      36
Last checked:             Tue Jul 26 04:49:18 2011
Check interval:           15552000 (6 months)

“Maximum mount count” is the number of mounts after which the filesystem will be checked by fsck. “Check interval” is the maximal time between two filesystem checks. The command also lets you see the actual mount count since the last check and when it took place.
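
If the defaults don’t suit you, tune2fs can also change these limits via its -c and -i options. For example, to allow 50 mounts between checks and a 3-month interval (adjust the values and the device to your own setup):

$ sudo tune2fs -c 50 -i 3m /dev/sda6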

Missing *.la files

Sometimes when you compile a package, it fails and complains that it can’t find an *.la file for some library that is installed. Recently I ran into this when compiling dev-libs/gobject-introspection, which complained about a missing libpng14.la. The solution is to run:

sudo lafilefixer --justfixit

It won’t create the missing .la file, but it will fix the libtool references so that nothing points to it, and packages will compile fine.

Deleting Comments from Tickets in Trac 0.12

About a year ago I wrote about a way to delete comments from tickets in Trac prior to version 0.12 (as the feature didn’t exist back then). Basically, the method was to delete the comment directly from the database. Lately, spammers have been harassing one of my Trac installations, bypassing the spam filtering and changing ticket properties. The old method wouldn’t revert those changes. After searching for a solution, I found a little-documented option in Trac 0.12 that allows deleting comments and reverting changes to tickets.

To enable it, go to the admin panel -> Plugins -> Trac 0.12 and enable TicketDeleter under tracopt.ticket.deleter.*. This will add a “Delete” button right next to the “Reply” and “Edit” buttons of every comment. Deleting a comment also reverts any changes it made to the ticket properties.
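
If you prefer editing the configuration directly, the same component can be enabled in trac.ini instead:

[components]
tracopt.ticket.deleter.* = enabled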

See #3641 and [9270] for the relevant ticket and changeset in Trac’s own Trac.