Duply credential error when using Amazon S3

Duply is a convenient wrapper around duplicity, a tool for encrypted incremental backups I’ve used for the last couple of years. Recently, after an upgrade, my Amazon S3 backups failed, reporting the following error:

    'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials

Boto, the library duplicity relies on for its Amazon S3 backend, requires the authentication parameters to be passed through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. As different backends require different variables, duply used to make that transparent: one would just set TARGET_USER and TARGET_PASS and duply would take care of the rest. However, duply 1.10 broke that compatibility and requires you to set the variables yourself. Hence, the fix is to replace the TARGET_* variables with exported AWS_* variables:

# TARGET_USER='XXXXXXXXXXXX'
# TARGET_PASS='XXXXXXXXXXXX'
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='XXXXXXXXXXXX'
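After updating the profile, a quick way to verify that the new credentials are picked up is to run a status check against it (the profile name below is just a placeholder):

$ duply myprofile status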

Patching an Existing Debian Package

This tutorial walks you through patching an existing Debian package. It is useful if you want to create a .deb of a package after fixing some bug or otherwise modifying the source. We will use hugin as our example.

We start by fetching the source package

$ apt-get source hugin
$ cd hugin-2017.0.0+dfsg

We will need a tool named quilt to make the process easier.

# apt install quilt

Before using quilt, we want to make it aware of the debian/patches directory which holds the patches. Adding the following lines to ~/.quiltrc will make quilt search up the directory tree for the debian/patches directory.

d=. ; while [ ! -d $d/debian -a `readlink -e $d` != / ]; do d=$d/..; done
if [ -d $d/debian ] && [ -z $QUILT_PATCHES ]; then
        # if in Debian packaging tree with unset $QUILT_PATCHES
        QUILT_PATCHES="debian/patches"
        QUILT_PATCH_OPTS="--reject-format=unified"
        QUILT_DIFF_ARGS="-p ab --no-timestamps --no-index --color=auto"
        QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
        QUILT_COLORS="diff_hdr=1;32:diff_add=1;34:diff_rem=1;31:diff_hunk=1;33:diff_ctx=35:diff_cctx=33"
        if ! [ -d $d/debian/patches ]; then mkdir $d/debian/patches; fi
fi

Now the actual patching process starts. The patches are applied in series, and our new patch should be the last one. We start by applying any existing patches, and then creating a new patch:

$ quilt push -a
$ quilt new 44_setlocale.patch

I chose the 44_ prefix because hugin already has a patch named 43_fallbackhelp.patch, and the convention is to name patches so that the names reflect the order in which they are applied. Next, we tell quilt which files we are going to modify and then edit them:

$ quilt add src/hugin1/hugin/huginApp.cpp
$ vim src/hugin1/hugin/huginApp.cpp

Alternatively, instead of editing the files manually, quilt import can be used to import an existing patch.
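For example (the patch file name here is just a placeholder):

$ quilt import ../fix-setlocale.patch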

Each patch comes with its own metadata to let other people know who wrote it and what it does. Use

$ quilt header --dep3 -e

to edit this metadata. For example:

Description: Call setlocale()
This fixes a bug in wxExecute, see https://trac.wxwidgets.org/ticket/16206
The patch has been submitted to upstream, https://groups.google.com/d/msg/hugin-ptx/FCi7ykPDZ5E/3w8E5U1SCQAJ
Author: Guy Rutenberg <guyrutenberg@gmail.com>
Last-Update: 2017-10-04
---
This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
See DEP-3 (http://dep.debian.net/deps/dep3/) for more details about the different fields.

After we finish editing, we finalize the patch and unapply all the patches:

$ quilt refresh
$ quilt pop -a

Now you can continue to build the deb from source as usual. We use debchange to create a new version, and debuild to build the package. After the package is built, it can be installed using debi.

$ DEBEMAIL="Guy Rutenberg <guyrutenberg@gmail.com>" debchange --nmu
$ debuild -us -uc -i -I -j7
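Once the build finishes, debi (also part of devscripts) can install the freshly built packages; it needs root privileges, for example:

$ sudo debi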


Use `mk-build-deps` instead of `apt-get build-dep`

If you are building a package from source on Debian or Ubuntu, usually the first step is to install the build-dependencies of the package. This is usually done with one of two commands:

$ sudo apt-get build-dep PKGNAME

or

$ sudo aptitude build-dep PKGNAME

The problem is that there is no easy way to undo or revert the installation of the build dependencies. All the installed packages are marked as manually installed, so later one cannot simply expect to “autoremove” those packages. Webupd8 suggests a clever one-liner that tries to parse the build dependencies out of apt-cache and mark them as automatically installed. However, as mentioned in the comments, it may be too liberal in marking packages as automatically installed, and hence remove too many packages.

The real solution is mk-build-deps. First you have to install it:

$ sudo apt install devscripts

Now, instead of using apt-get or aptitude directly to install the build-dependencies, use mk-build-deps.

$ mk-build-deps PKGNAME --install --root-cmd sudo --remove

mk-build-deps will create a new package, called PKGNAME-build-deps, which depends on all the build-dependencies of PKGNAME, and then install it, thereby pulling in all the build-dependencies and installing them as well. As those packages are now installed as dependencies, they are marked as automatically installed. Once you no longer need the build-dependencies, you can remove the package PKGNAME-build-deps, and apt will autoremove all the build-dependencies which are no longer necessary.
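For example, once the build is done, something like the following removes the dependency package and everything it pulled in (PKGNAME is, as before, a placeholder):

$ sudo apt remove PKGNAME-build-deps
$ sudo apt autoremove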

Calculate Google Drive Folder Size Using `rclone`

Google Drive lacks a very basic feature: calculating folder size. There is no way in the web interface to view the total size of a given directory. There are a couple of dubious-looking online “folder size analyzers” that request access permissions to your Google Drive and offer you the basic functionality of calculating folder size. While those apps have a legitimate need for access permission to your account, you may consider giving those permissions to random apps a questionable decision.

The (very) useful tool rclone, which provides an rsync-like interface to many cloud storage providers, implements this functionality. After configuring rclone to work with your Google Drive, use the size command to determine the size of a given folder:

$ rclone size "gdrive:Pictures/"
Total objects: 1421
Total size: 9.780 GBytes (10501374440 Bytes)

The size is calculated recursively. However, there is no simple way to display the size of each sub-directory.
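That said, it can be scripted. Below is a rough sketch that prints the total size of each top-level folder under a directory; it assumes a reasonably recent rclone that provides the lsf command, and the remote and path are only examples:

#!/bin/sh
# print the total size of each top-level folder under gdrive:Pictures/
rclone lsf --dirs-only "gdrive:Pictures/" | while IFS= read -r dir; do
    printf '%s: ' "$dir"
    rclone size "gdrive:Pictures/$dir" | grep 'Total size'
done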

One Line Implementations of GCD and Extended GCD in Python

Sometimes you need a quick and dirty implementation of the greatest common divisor and its extended variant. Actually, because the fractions module comes with a gcd function, the implementation of the extended greatest common divisor algorithm is probably the more useful of the two. Both implementations are recursive, but work nonetheless even for comparatively large integers.

# simply returns the gcd
gcd = lambda a,b: gcd(b, a % b) if b else a

# egcd(a,b) returns (d,x,y) where d == a*x + b*y
egcd = lambda a,b: (lambda d,x,y: (d, y, x - (a // b) * y))(*egcd(b, a % b)) if b else (a, 1, 0)

The code above is Python 2 and 3 compatible.
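As a quick check of the return convention, the triple returned by egcd satisfies d == a*x + b*y:

>>> gcd(240, 46)
2
>>> egcd(240, 46)
(2, -9, 47)
>>> 240 * (-9) + 46 * 47
2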

`rsync` and FAT File-Systems

I’m using rsync to sync files from my computer to a FAT formatted SD card. Using the --update flag to rsync makes it “skip files that are newer on the receiver”. It seems that it should work as follows: After the first sync, any subsequent syncs will only transfer those files which changed in the meantime. However, I noticed that transfer times are usually longer than expected, which led me to think that things are not working as they should.

As --update relies only on the modification date, obviously something was wrong with it. After ruling out several other possibilities, I guessed that the modification dates of some files got mangled. A quick check revealed that FAT can only store modification times at a 2-second granularity. I had also used rsync’s --archive flag, which among other things attempts to preserve the modification times of the files it copies. But what about FAT’s coarser granularity for modification times? That was apparently the culprit. Some files, when copied, got a modification time up to 1 second earlier than the original! Hence, every time I synced, from rsync’s perspective the target was older than the source, and hence needed to be overwritten.

This can be demonstrated by a short session at the shell.
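A minimal sketch, assuming the SD card is mounted at /mnt/sd and picture.jpg is the file being synced (both names are illustrative):

$ rsync -a --update picture.jpg /mnt/sd/
$ stat -c '%y' picture.jpg /mnt/sd/picture.jpg

Comparing the two modification times typically shows the copy on the card being up to a second older than the source, due to FAT's coarse timestamps. Because of that, a second run of the same rsync command re-transfers the file even though nothing has changed. (rsync's --modify-window option, which treats timestamps that differ by at most the given number of seconds as equal, exists for exactly this situation.)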

Let’s Encrypt: Reload Nginx after Renewing Certificates

Today I had an incident in which my webserver served expired certificates. My blog relies on Let’s Encrypt for SSL/TLS certificates, which have to be renewed every 3 months. Usually, the cronjob which runs certbot renew takes care of it automatically. However, there is one step missing: the server must reload the renewed certificates. Most of the time, the server gets reloaded often enough that everything is okay, but today it had been quite a while since the nginx server was last restarted, so expired certificates were served and the blog became unavailable.

To work around it, we can make sure nginx reloads its configuration after each successful certificate renewal. The automatic renewal is defined in /etc/cron.d/certbot. The default contents under Debian Jessie are as follows:

# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew

The last line makes sure certificate renewal runs twice a day. Append --renew-hook "/etc/init.d/nginx reload" to it, so it looks like this:

0 */12 * * * root test -x /usr/bin/certbot && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "/etc/init.d/nginx reload"

The --renew-hook flag runs the command given as its argument after each successful certificate renewal. In our case we use it to reload the nginx configuration, which also reloads the newly renewed certificates.

Update 2019-02-27:

--renew-hook has been deprecated in recent versions of certbot. Additionally, Debian moved from using cronjobs for automatic renewals to systemd timers where available. On the other hand, certbot now supports hooks placed in its configuration directory. So, instead of what is described above, I would suggest creating a file /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx with the following content:

#! /bin/sh
set -e

/etc/init.d/nginx configtest
/etc/init.d/nginx reload

Don’t forget to make the file executable.
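For example:

# chmod +x /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx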

LyX: Missing title when passing `ignorenonframetext` to Beamer

Passing ignorenonframetext as an option to Beamer causes it to ignore all text that is not inside a frame. It is useful when you want to add content for the article version of the presentation (or simply script lines for yourself) that should not show in the regular presentation. LyX puts the title elements outside any frame; therefore, if you use ignorenonframetext you end up missing the title frame. The solution is to manually wrap the title block (the title, author, institute, etc.) in a frame and append \maketitle to it. This will cause the title frame to be rendered correctly.
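In the exported LaTeX, the workaround would look roughly like the following sketch (the title, author, and institute values are placeholders):

\begin{frame}
  \title{My Presentation}
  \author{Some Author}
  \institute{Some Institute}
  \maketitle
\end{frame}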

Optimize LaTeX PDF Output for Kindle

The Kindle can display PDFs, but usually the result is very hard to read. Normal PDFs are not suitable, especially when it comes to paper size, for the relatively small display of the Kindle. For a forthcoming project, which I intend to write in LaTeX and read on the Kindle, I looked into optimizing the document settings so the result renders in a readable manner on the Kindle.

I started with the normal article class. The result is not good at all:

[screenshot: article]

While the Kindle zooms in automatically to remove the usually very wide margins LaTeX uses, the big (A4) paper size still results in a tiny font on the Kindle display. Switching to KOMA-Script is a bit better, but mainly provides better mechanisms to control the paper size for later experiments:

[screenshot: scrartcl]

The next try is simply to use the A5 paper size. The result is getting better, but the paper size is still too big. Setting the paper size manually to 12cm by 9cm (the screen’s physical dimensions) and setting the pagestyle to empty (which removes the page numbering, among other things) gives a much better result; however, because of the (still) wide margins, the auto-zoom results in a font size that is too big and not enough content fits on a page:

[screenshot: scrartcl_12x9]

Finally, manually setting the text area to be a bit smaller (11cm by 8cm) than the paper size results in small margins and very little auto-zoom. The output can be clearly read on the Kindle, and still quite a bit of text fits on a single page:


The LaTeX code for the last example is:

\documentclass[DIV=calc,paper=9cm:12cm,pagesize]{scrartcl}

\areaset{8cm}{11cm}
\pagestyle{empty}
\usepackage{lipsum}
\begin{document}
\lipsum
\end{document}

KOMA-Script: Specifying Binding Correction for RTL Documents

The KOMA-Script bundle provides an option to specify the amount of binding correction needed in order to compensate for the width lost in the binding process. By default, it is added to the left margin, which is where the binding is applied for left-to-right languages. However, if a document is written in Hebrew or Arabic, one binds it on the right. The KOMA-Script manual does not consider that option. After a bit of playing, I found out that simply using a negative value for the binding correction works.

For example, in an English document you would use

\documentclass[BCOR=8.25mm]{scrreprt}

For Hebrew you would set

\documentclass[BCOR=-8.25mm]{scrreprt}