Kindle Paperwhite “Unable to Open Item”

Recently, I tried transferring some new ebooks to my Kindle Paperwhite (first generation). The books were listed properly, but when I tried to open them I got an
“Unable to Open Item” error, suggesting I re-download the books from Amazon. I tried transferring the files again and again, but it didn’t help. Some of the books were MOBI files while others were AZW (which I got from אינדיבוק), and all of them opened fine on my computer.

Finally, following advice from a comment on the KindledFans blog, I converted the files to AZW3 (the original comment suggested MOBI, but AZW3 works better with Hebrew). After converting, I moved the files to my Kindle and they opened just fine.
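One way to do the conversion, for example, is with Calibre’s ebook-convert command-line tool (the file names here are just placeholders):

$ ebook-convert book.mobi book.azw3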

Enabling Compose-Key in GNOME 3.4

For some reason I couldn’t easily find how to enable the compose key in GNOME 3.4. None of the references I found matched the actual menus and dialogs on my system, including the official GNOME help pages. So I’ve decided to document it here for my future reference.

  1. Go to System Settings->Keyboard Layout.
  2. Select the Layouts tab and click Options.
  3. Under Compose key position, select the key you want to use as the compose-key.

Wikipedia has a nice table summarizing the compose-key sequences.

gettext with Autotools Tutorial

In this tutorial we walk through the steps needed to add localization to an existing project that uses GNU Autotools as its build system.

We start with a slightly modified version of the Hello World example that comes with the Automake sources. You can keep track of the changes to the source throughout this tutorial by following the commits to amhello-gettext on GitHub. Initially, the project consists of the following files:

$ ls -RF
.:
configure.ac  Makefile.am  README  src/

./src:
main.c  Makefile.am

Running gettextize

The first step is copying some necessary gettext infrastructure into your project. This is done by running gettextize in the root directory of your project. The command will create a bunch of new files and modify some existing ones. Most of these files are auto-generated, so there is no need to add them to your version control; you should only add the files you create or modify manually.
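For example, from the top-level directory of the project:

$ gettextize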

You will need to add the following lines to your configure.ac:

AM_GNU_GETTEXT([external])
AM_GNU_GETTEXT_VERSION(0.18)

The version specified is the minimum version of gettext required to build your package.

Copy po/Makevars.template to po/Makevars and modify it as needed.
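That is:

$ cp po/Makevars.template po/Makevars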

The next step is to copy over gettext.h to your sources.

$ cp /usr/share/gettext/gettext.h src/

gettext.h is a convenience wrapper around libintl.h, the header that provides the various translation functions. The wrapper allows disabling gettext if --disable-nls is passed to the ./configure script. It is recommended to use gettext.h rather than including libintl.h directly.

Triggering gettext in main()

In order for gettext to work, you need to trigger it in your main(). This is done by adding the following lines to the main() function:

setlocale (LC_ALL, "");
bindtextdomain (PACKAGE, LOCALEDIR);
textdomain (PACKAGE);

You should also add #include "gettext.h" to the list of includes.

PACKAGE should be the name of your program, and is usually defined in the config.h file generated by autoconf/autoheader. To define LOCALEDIR we need to add the following line to src/Makefile.am:

AM_CPPFLAGS = -DLOCALEDIR='"$(localedir)"'

If AM_CPPFLAGS is already defined, just append to it the -DLOCALEDIR='"$(localedir)"' part.
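For reference, the complete src/Makefile.am in our example might now look roughly like this (following the amhello layout shown above):

bin_PROGRAMS = hello
hello_SOURCES = main.c gettext.h
AM_CPPFLAGS = -DLOCALEDIR='"$(localedir)"'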

Marking strings for translation

At this point, your program should compile with gettext. But since we have not marked or translated any strings yet, nothing will actually be translated. Before translating we need to mark the translatable strings in the sources. Wrap each translatable string in _(...), and add the following lines to each file that contains translatable strings:

#include "gettext.h"
#define _(String) gettext (String)
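Putting the pieces from the last two sections together, src/main.c might now look roughly like this (a sketch; your actual program will print its own strings):

#include <config.h>
#include <locale.h>
#include <stdio.h>

#include "gettext.h"
#define _(String) gettext (String)

int
main (void)
{
  /* Initialize gettext: use the user's locale and tell it where the
     compiled message catalogs are installed.  */
  setlocale (LC_ALL, "");
  bindtextdomain (PACKAGE, LOCALEDIR);
  textdomain (PACKAGE);

  /* Strings wrapped in _() will be extracted for translation.  */
  puts (_("Hello World!"));
  return 0;
}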

Extracting strings for translation

Before extracting the strings, we need to tell gettext where to look. This is done by listing each source file with translatable strings in po/POTFILES.in. So in our example po/POTFILES.in should look like:

# List of source files which contain translatable strings.
src/main.c

Afterwards, the following command can be used to actually extract the strings into po/amhello.pot (which should go into version control):

make -C po/ update-po

If you haven’t run ./configure yet, you need to run autoreconf --install && ./configure before running the above make command.

Translating strings

To begin translating, you need a *.po file for your language. This is done using msginit:

cd po/ && msginit --locale he_IL.utf8

The locale should be specified as a two-letter language code followed by a two-letter country code. In my example I’ve used Hebrew, hence msginit will create a po/he.po file. To translate the program, you edit the .po file using either a text editor or a dedicated program (see a list of editors here).
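A translated entry in po/he.po looks roughly like this (the source-file comment is generated automatically; the line number is just illustrative):

#: src/main.c:21
msgid "Hello World!"
msgstr "שלום עולם!"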

After you have updated the .po file for your language, list the language in po/LINGUAS (you will need to create this file). For example, in my case:

# Set of available languages
he

Now you should be ready to compile and test the translation. Unfortunately, gettext requires installing the program in order to properly load the message catalogs, so we need to call make install.

./configure --prefix /tmp/amhello
make
make install

Now, to check the translation, simply run /tmp/amhello/bin/hello (you might need to change LC_ALL or LANGUAGE, depending on your locale, to see the translation).

$ LANGUAGE=he /tmp/amhello/bin/hello 
שלום עולם!

A final note about bootstrapping: when people check out your code from version control, many auto-generated files will be missing. The simplest way to bootstrap the code into a state where you can simply run ./configure && make is by using autoreconf:

autoreconf --install

This will add any missing files and run all the autotools friends (aclocal, autoconf, automake, autoheader, etc.) in the right order. Additionally, it will call autopoint, which copies the necessary gettext files that were generated when you called gettextize earlier in the tutorial. If your project uses an ./autogen.sh script that calls the autotools utilities manually, you should add a call to autopoint --force before the call to aclocal.
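For projects that take the ./autogen.sh route, a minimal sketch of such a script (assuming it invokes the tools manually rather than via autoreconf) might look like this:

#!/bin/sh
# Copy the gettext infrastructure before aclocal runs.
autopoint --force
aclocal -I m4   # gettextize/autopoint put their macros in m4/
autoconf
autoheader
automake --add-missing --copy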

Finally, those are the files that end up version controlled in our example:

$ ls -RF
.:
configure.ac  Makefile.am  po/  README  src/

./po:
amhello.pot  he.po  LINGUAS  Makevars  POTFILES.in

./src:
gettext.h  main.c  Makefile.am


Displaying Google Adsense in MediaWiki

This post shows how to insert code that displays ads in MediaWiki. The proposed methods use hooks instead of modifying the skin. This has two advantages:

  1. There is no need to modify each skin separately, so users can change skins and still see the ads in the same logical place.
  2. It makes upgrades simpler. Hooks reside in LocalSettings.php, which, unlike skins, isn’t modified by MediaWiki version upgrades.

The examples below show how to insert ads into the header, footer and sidebar of each page. I’ve used the Google Adsense ad-serving code, but it could be easily replaced by the ad-serving code of any other ad network.
Continue reading Displaying Google Adsense in MediaWiki

View Failed Login Attempts – lastb

The lastb command can be used to list failed login attempts. By default it displays a nice table of all failed attempts, including the username, the time and the host the attempt originated from. For example, the following command produces a sorted list of the usernames that were tried:

sudo lastb -w | cut -d " " -f 1 | sort | uniq | less

The -w flag tells lastb to display the full username. The cut, sort and uniq commands turn the output of lastb into a sorted list that contains each username only once.

When I ran it recently on my server I found some interesting results. Nobody had tried to log in as root in the last fortnight, but they did try r00t, root2, root3, roottest, rootuser and a bunch of similar ones. There were a bunch of generic users such as admin, support, test, user and sales, and a surprising number of software-related ones: wordpress, wp, stunnel, mysql, moodle, mongodb, minecraft, etc.

Another useful command is

$ sudo lastb -f /var/log/btmp.1 -w -i | awk '{print $3}' | sort | uniq --count | sort -nr | less

which lists hosts sorted by the number of failed attempts that originated from each host.

Overall, in the last two weeks my server experienced more than 3,300 failed login attempts using more than 800 unique usernames. Fortunately, since my server only allows public-key authentication via SSH, all those attempts are pretty futile.

Introducing mdview – a lightweight Markdown viewer

My favorite editor is vim, but it has its downsides as well. Vim doesn’t have the GUI needed to extend it to preview things like Markdown properly. Sure, vim can highlight Markdown syntax, but that is not a replacement for real previewing. With that itch in mind, I searched for a solution but found none that satisfied me. For reStructuredText I had found a solution that worked well: it started a local web server and did the previewing in the browser. Inspired by it, I started writing mdview.

mdview allows you to instantly preview any Markdown file you’re editing in your favorite browser. It automatically refreshes when the file changes, which makes it great for working with the editor and browser side by side for a live preview.
Continue reading Introducing mdview – a lightweight Markdown viewer

Outbrained – Greasemonkey script to remove tracking Outbrain links

Outbrain is a service that provides related-content links to publishers. It is used by some news sites I frequent, and recently I’ve been annoyed by its tracking behavior. When you hover with your cursor over a link it looks like a regular, benign link, but once you click it, it changes to an evil tracking URL. To add to the annoyance, it is not always easy to distinguish Outbrain “ads” from legitimate links at first sight.

To end this annoyance, I’ve written a little Greasemonkey script. It is currently set up to work for Haaretz, Ynet, Calcalist and TheMarker, but it should work fine for any site using Outbrain if enabled there.

Download: outbrained.user.js


// ==UserScript==
// @name Outbrained
// @namespace http://www.guyrutenberg.com
// @description Removes annoying tracking outbrain links.
// @include http://www.haaretz.co.il/*
// @include http://www.ynet.co.il/*
// @include http://www.calcalist.co.il/*
// @include http://www.themarker.com/*
// @version 1.0
// @grant none
// ==/UserScript==
// Change Log
// ==========
// 1.0: 2014-08-01
// * Initial release.
// Returns true if the link carries an onmousedown handler that rewrites
// it to an outbrain.com tracking URL.
function isOutbrainLink(element) {
    var onmousedown_attr = element.getAttribute('onmousedown');
    return /^this\.href='http:\/\/[^.]+\.outbrain\.com/.test(onmousedown_attr);
}

// Strip the handlers that swap in the tracking URL on click.
function fixLink(element, index, array) {
    element.removeAttribute('onmousedown');
    element.removeAttribute('onclick');
}

var all_links = document.getElementsByTagName('a');
for (var i = 0; i < all_links.length; i++) {
    var link = all_links[i];
    if (!isOutbrainLink(link))
        continue;
    fixLink(link);
}

wxWidgets 2.8 to 3.0 Migration: Converting wxString to Numbers

wxWidgets provides a set of utility methods for converting wxString to various integer types, such as ToLong(). While the documentation for those functions remained roughly the same between wxWidgets 2.8 and 3.0, the implementation did change. In wxWidgets 2.8, if the string was empty, any of the number conversion functions would store the value 0. In wxWidgets 3.0 it’s different, as can be learned from the following comment in wxstring.cpp:

// notice that we return false without modifying the output parameter at all if
// nothing could be parsed but we do modify it and return false then if we did
// parse something successfully but not the entire string

This means that if you relied on ToLong() to store 0 in the pointed-to long when given an empty string, in wxWidgets 3.0 you will get an uninitialized value there.
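To be safe, you can initialize the output variable yourself and check the return value, along these lines (a minimal sketch, mirroring the empty-string case):

#include <wx/string.h>

wxString str = wxT("");   // e.g. an empty user input
long value = 0;           // initialize explicitly; in 3.0 ToLong() may leave it untouched
if (!str.ToLong(&value)) {
    // nothing was parsed; value still holds the default we set above
}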

I also noticed, when comparing the code of wxString in 2.8 and 3.0, that in 3.0 the integer conversion functions are implemented using C macros, while in 2.8 they used templates. I wonder why it was changed, as it looks more like a regression to me.

C++ : mt19937 Example

C++11 introduces several pseudo-random number generators designed to replace the good old rand from the C standard library. I’ll show basic usage examples of std::mt19937, which provides random number generation based on the Mersenne Twister algorithm. Using the Mersenne Twister implementation that comes with C++11 has several advantages over rand(), among them:

  1. mt19937 has a much longer period than that of rand, i.e. it will take its random sequence much longer to repeat itself.
  2. It has much better statistical behavior.
  3. Several different random number generator engines can be instantiated simultaneously with different seeds, compared with the single “global” seed srand() provides.

The downside is that mt19937 is a bit less straightforward to use. However, I hope this post will help with that :-).
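As a taste, a minimal sketch of using std::mt19937 together with a distribution (here, rolling a die in the range 1 to 6) looks roughly like this:

#include <iostream>
#include <random>

int main()
{
    std::random_device rd;                          // non-deterministic seed source
    std::mt19937 gen(rd());                         // Mersenne Twister engine, seeded once
    std::uniform_int_distribution<int> dist(1, 6);  // maps raw output to a die roll

    for (int i = 0; i < 10; ++i)
        std::cout << dist(gen) << ' ';
    std::cout << '\n';
    return 0;
}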
Continue reading C++ : mt19937 Example

Make Offline Mirror of a Site using `wget`

Sometimes you want to create an offline copy of a site that you can take and view even without internet access. Using wget you can make such a copy easily:

wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent http://example.org

Explanation of the various flags:

  • --mirror – Turns on (among other things) recursive downloading.
  • --convert-links – Converts all the links (including links to assets such as CSS stylesheets) to relative links, so they are suitable for offline viewing.
  • --adjust-extension – Adds suitable extensions to filenames (html or css) depending on their content type.
  • --page-requisites – Downloads things like CSS stylesheets and images required to properly display the page offline.
  • --no-parent – When recursing, do not ascend to the parent directory. It is useful for restricting the download to only a portion of the site.

Alternatively, the command above may be shortened:

wget -mkEpnp http://example.org

Note that the last p is part of np (--no-parent), hence p appears twice in the flags.