Dealing with Spam – Follow-up

At the beginning of June, I wrote about the rising number of spam comments missed by Akismet. The main issue was a noticeable increase in the number of spam messages that get through Akismet, which is pretty much the de facto spam filter for WordPress. Twice a day, on average, I had to manually mark comments as spam, which really got under my skin. After writing that post, I looked at a number of solutions.
Continue reading Dealing with Spam – Follow-up

Something gone wrong with Akismet?

Akismet is a great spam filtering service for WordPress that did wonders for my blog. Actually, it's quite generic and can be used with any commenting system, for example with Trac (I used it for Open Yahtzee's Trac before reverting to SourceForge's new ticket system). For a long time, Akismet allowed me to blog and not worry much about spam, as it hardly missed anything – usually fewer than 5 missed spam messages a month. But something went wrong in the last three months, as can be seen in this chart:

[spam_chart: missed spam vs. total spam messages, February–May]

As you can see, the number of missed spam messages increased rapidly from February to May (more than 15-fold), while the overall number of spam messages decreased. I have to mark the missed spam manually, and I really can't say why some of them are missed. They are as spammy as always and surely not unique in any sense.

Although it's not a deluge of missed spam, I really don't like dealing with it, so I'm considering adding a CAPTCHA to supplement Akismet. This will also help with my backups, because Akismet keeps all the spam messages it flags for 15 days, which means that, unfortunately, I back up more than 20,000 spam messages each week (hopefully, one day I'll find a good use for them).

Has something gone wrong with Akismet? Do you experience the same problems?

spass-3.1 Secure Password Generator Released

Usually release announcements go together with the actual release. Somehow, I’ve postponed writing about the new release for quite some time, but better late than never.

spass is a tool that creates cryptographically strong passwords and passphrases by generating random bits from your sound card. It works by passing noise from the sound card through a Von Neumann process to remove bias, and then using MD5 to “distill” a truly random bit from every 4 bits of input.
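
To give a feel for the Von Neumann step, here is a minimal sketch (illustration only, not the actual spass source): the raw bits are consumed in pairs, an unequal pair contributes its first bit, and equal pairs are thrown away, which removes any constant bias in the noise source.

#include <cstddef>
#include <iostream>
#include <vector>

// Von Neumann debiasing (illustration only): consume the raw bits in pairs;
// "10" emits 1, "01" emits 0, "00" and "11" pairs are discarded.
std::vector<bool> von_neumann(const std::vector<bool>& raw)
{
    std::vector<bool> out;
    for (std::size_t i = 0; i + 1 < raw.size(); i += 2)
        if (raw[i] != raw[i + 1])
            out.push_back(raw[i]);
    return out;
}

int main()
{
    // A short, heavily biased sample: the unequal pairs are still unbiased.
    std::vector<bool> raw = {1,1, 1,0, 0,1, 1,1, 1,0};
    for (bool b : von_neumann(raw))
        std::cout << b;
    std::cout << std::endl;  // prints "101"
    return 0;
}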

The new version of spass, version 3.1, was released two months ago. The code should now compile easily on both Linux (ALSA, OSS and PortAudio backends) and Windows (only PortAudio is supported). There are some minor tweaks to the CLI, but the main addition is a new Qt interface; screenshots of it are available on the project's SourceForge page. I've also migrated the build system to CMake (from automake), which should make it easier to build.

You can download the sources, a 64-bit Debian package and Windows binaries from here. If you use spass and can create binary packages for more platforms, that would be great.

BTW, as you can see, I've migrated the code from GitHub to SourceForge. I know it's not a popular move, but GitHub's lack of binary downloads is really frustrating.

Securing Access to phpMyAdmin on Lighttpd via SSH

phpMyAdmin lets you easily manage your MySQL databases, but as such it also presents a security risk. Logging in to phpMyAdmin is done using a username and password for the database. Hence, anyone able to either eavesdrop on the credentials or guess them by brute force could wreak havoc on your server.

A possible solution to the eavesdropping problem is to use SSL to secure the communication to phpMyAdmin. However, SSL certificates don't provide any way to stop brute-forcing. To prevent brute-force attempts, you could limit access to your IP address, but most of us don't have static IPs at home. The solution I came up with kind of combines both approaches.

Instead of using SSL to encrypt the traffic, I'm using SSH, and instead of limiting access to my IP address, I'll limit access to the server's own IP address. How does it work? We start by editing the phpMyAdmin configuration for lighttpd, which usually resides in /etc/lighttpd/conf-enabled/50-phpmyadmin.conf. At the top of the file you'll find the following lines:

alias.url += (
        "/phpmyadmin" => "/usr/share/phpmyadmin",
)

These lines define the URL mapping to the phpMyAdmin installation; without it, phpMyAdmin wouldn't be accessible. We use lighttpd's conditional configuration to limit who is able to use that mapping by changing the above lines to:

$HTTP["remoteip"] == "85.25.120.32" {
        alias.url += (
                "/phpmyadmin" => "/usr/share/phpmyadmin",
        )
}

This limits access to phpMyAdmin to clients whose IP is the server's IP (of course, you'll need to change that IP to your server's IP). This curtails any brute-forcing attempts, as only someone trying to access phpMyAdmin from the server itself will succeed.

But how can we “impersonate” the server's IP when we connect from home? The easiest solution is to use the SOCKS proxy provided by SSH.

ssh user@server.com -D 1080

This will set up a SOCKS proxy on local port 1080 that tunnels traffic through your server. The next step is to instruct your browser or OS to use that proxy (in Firefox it can be done via Preferences->Advanced->Network->Connection Settings; it can also be defined globally under Gnome via Network Settings->Network Proxy). This achieves both of our goals: we are now able to connect to the server using its own IP, and our connection to the server is encrypted using SSH.
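
If you want to verify the tunnel from the command line before fiddling with browser settings, curl can use the same SOCKS proxy (this assumes the ssh command above is still running; a 404 here means the request did not reach lighttpd with the server's own IP):

curl -I --socks5-hostname localhost:1080 http://server.com/phpmyadmin/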

This method can be used to secure all kinds of sensitive applications. We could have achieved the same thing by using a VPN, but that's more hassle to set up compared to SSH, which is available on practically any server.

Incremental WordPress Backups using Duply (Duplicity)

This post outlines how to create encrypted incremental backups for WordPress using duplicity and duply. The general method, as you will see, is pretty generic, and I've been using it successfully to back up Django sites and MediaWiki installations as well. You can use this method to make secure backups to almost any kind of service imaginable: FTP, SFTP, Amazon S3, rsync, Rackspace Open Cloud, Ubuntu One, Google Drive and whatever else you can think of (as long as the duplicity folks have implemented it :-)). If you prefer a simpler solution, and don't care about incremental or encrypted backups, see my Improved FTP Backup for WordPress or my WordPress Backup to Amazon S3 Script.
Continue reading Incremental WordPress Backups using Duply (Duplicity)

Manually Install SSL Certificate in Android Jelly Bean

Apparently it's pretty easy, but there are some pitfalls. The first step is to export the certificate as a DER-encoded X.509 certificate. This can be done using Firefox (on a PC) by clicking on the SSL lock icon in the address bar, then More Information -> View Certificate -> Details -> Export. The exported certificate needs to be saved in the root directory of the phone's internal storage, with a *.cer extension (or *.crt). Other extensions will not work.
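
If you'd rather skip Firefox, the same DER file can be produced on the command line with openssl (shown here for a hypothetical example.com; openssl x509 picks out the first certificate the server sends and re-encodes it as DER):

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -outform DER -out example.cer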

Afterwards, on the phone, tap “Install from device storage” under Settings->Security->Credential Storage. If you did everything right in the previous step, it will display the certificate name and ask you to confirm its installation. If you exported the certificate in the wrong format, gave it the wrong extension or placed it somewhere other than the root of the internal storage, it will display the following error:

No certificate file found in USB storage

If you see it, just make sure you exported the certificate correctly and saved it in the right place.

More details: Work with certificates (geared towards the Galaxy Nexus, but should apply to any Android 4.0 and above).

Updated Aug 2015: Fixed a broken link.

GitHub Stops Offering Binary Downloads

Only a few months ago, almost anyone would swear by GitHub and curse SourceForge. GitHub was (and probably still is) the fastest growing, and by now the largest, code repository, while SourceForge was the overthrown king. SourceForge looks like an archaic service despite some major facelifts, while GitHub is the cool kid on the block. Recently, GitHub showed us why SourceForge is still relevant for the open-source community.

Back in December, GitHub dropped support for downloading files that aren't part of the code repository. They say they believe code should be distributed directly from the git repository. This is probably fine for projects written in dynamic languages (such as Python, Ruby or JavaScript) where no binary distribution is expected. However, it seems to me like a blow to any GitHub-hosted C/C++ project. No one expects lay users to compile projects directly from source; it's a hassle for most people except developers (and possibly Gentoo users :-)).

It might be a sensible move on GitHub's part, as they promote themselves as a developer collaboration tool, and most of their projects are indeed in dynamic languages (see the top languages statistics). The GitHub team offers two solutions in their post: uploading files to Amazon S3 or switching to SourceForge, and I've read at least a few people recommending putting binary releases in the git repository (a bad idea).

Overall, I think this move by GitHub just turned SourceForge into the best code repository (for compiled code) once again.

Vim: Creating .clang_complete using CMake

The clang_complete plugin for Vim offers superior code completion. If your project is anything but trivial, it will only do so if you provide a .clang_complete file with the right compilation arguments. The easy way to do that is to use the cc_args.py script that comes with the plugin to record the options directly into the .clang_complete file. Usually one does

make CXX='~/.vim/bin/cc_args.py clang++'

However, the makefiles generated by CMake ignore the CXX variable.

The solution is to call CMake with the CXX environment variable set:

CXX="$HOME/.vim/bin/cc_args.py clang++" cmake ..
make

Note that this will create the .clang_complete file in the build directory (I've assumed an out-of-source build), so just copy the file over to the working directory of your Vim so it can find it. You'll also need to re-run cmake (without the CXX variable) to stop it from re-creating the .clang_complete file on each build.
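
For example, assuming the build directory is a subdirectory called build and you run Vim from the project root:

cp build/.clang_complete .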

While looking for this solution, I first tried setting the CMAKE_CXX_COMPILER variable in CMake; however, for some strange reason it didn't like it, saying that the compiler wasn't found (it rejects command-line arguments given as part of the compiler command).
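
For reference, the rejected attempt looked roughly like this (CMake treats the whole string as the path of the compiler binary, hence the “compiler not found” error):

cmake -DCMAKE_CXX_COMPILER="$HOME/.vim/bin/cc_args.py clang++" ..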

The more I use clang_complete, the more awesome I find it. It has its quirks, but nonetheless it's much simpler and better than manually creating tag files for each library.

Updated 6/1/2014: When setting CXX, use $HOME instead of ~ (fixes issues with newer versions of CMake).

Using std::chrono::high_resolution_clock Example

Five years ago I showed how to use clock_gettime to do basic high-resolution profiling. The approach there is very useful but, unfortunately, not cross-platform: it works only on POSIX-compliant systems (notably not Windows).

Luckily, the not-so-new C++11 standard provides, among other things, an interface to high-precision clocks in a portable way. It's still not a perfect solution, as it only provides wall time (clock_gettime can give per-process and per-thread actual CPU time as well). However, it's still nice.

#include <iostream>
#include <chrono>
using namespace std;

int main()
{
	// Ticks per second of the high resolution clock (its precision)
	cout << chrono::high_resolution_clock::period::den << endl;
	auto start_time = chrono::high_resolution_clock::now();
	// volatile unsigned: keeps the busy loop from being optimized away
	// and avoids signed-overflow undefined behavior
	volatile unsigned temp = 1;
	for (int i = 0; i < 242000000; i++)
		temp += temp;
	auto end_time = chrono::high_resolution_clock::now();
	cout << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":";
	cout << chrono::duration_cast<chrono::microseconds>(end_time - start_time).count() << endl;
	return 0;
}

I'll explain the code a bit. chrono is the new header file that provides the various time- and clock-related functionality of the new standard library. high_resolution_clock should be, according to the standard, the clock with the highest precision.

cout << chrono::high_resolution_clock::period::den << endl;

Note that there is no guarantee how many ticks per second the clock has, only that it's the highest available. Hence, the first thing we do is get the precision by printing how many times a second the clock ticks. My system provides 1000000 ticks per second, which is microsecond precision.

Getting the current time using now() is self-explanatory. The possibly tricky part is

cout << chrono::duration_cast<chrono::seconds>(end_time - start_time).count() << ":";

(end_time - start_time) is a duration (a newly defined type) and the count() method returns the number of ticks it represents. As we said, the number of ticks per second may change from system to system, so in order to get the number of seconds we use duration_cast. The same goes for microseconds in the next line.

The standard also provides other useful time units such as nanoseconds, milliseconds, minutes and even hours.
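
For example, in the context of the code above, reporting the elapsed time in milliseconds only requires changing the cast target:

cout << chrono::duration_cast<chrono::milliseconds>(end_time - start_time).count() << endl;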

Installing Citrix Receiver on Ubuntu 64bit

It’s a hassle.

The first step is to grab the 64-bit deb package from the Citrix website. Next, install it using dpkg:

~$ sudo dpkg --install Downloads/icaclient_12.1.0_amd64.deb

This results in the following error:

dpkg: error processing icaclient (--install):
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 icaclient

This can be fixed by changing line 2648 in /var/lib/dpkg/info/icaclient.postinst from:

         echo $Arch|grep "i[0-9]86" >/dev/null

to:

         echo $Arch|grep -E "i[0-9]86|x86_64" >/dev/null

And then execute

~$ sudo dpkg --configure icaclient

Credit for this part goes to Alan Burton-Woods.

Next, when trying to actually use Citrix Receiver to launch any apps, I encountered the following error:

Contact your help desk with the following information:
You have not chosen to trust "AddTrust External CA Root", the
issuer of the server's security certificate (SSL error 61)

In my case the missing root certificate was Comodo's AddTrust External CA Root; depending on the certificate used by the server you're trying to connect to, you may be missing some other root certificate. You can either download the certificate from Comodo, or use the one in /usr/share/ca-certificates/mozilla/AddTrust_External_Root.crt (they are the same). Either way, you should copy the certificate to the icaclient certificate directory:

$ sudo cp /usr/share/ca-certificates/mozilla/AddTrust_External_Root.crt /opt/Citrix/ICAClient/keystore/cacerts/

These steps got Citrix working for me, but your mileage may vary.