After a recent upgrade to Debian Stretch, my OpenVPN server stopped working. Checking the relevant log file, /var/log/openvpn.log, I found the following not-very-helpful error message:
Fri Nov 23 16:46:37 2018 OpenVPN 2.4.0 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Jul 18 2017
Fri Nov 23 16:46:37 2018 library versions: OpenSSL 1.0.2l 25 May 2017, LZO 2.08
Fri Nov 23 16:46:37 2018 daemon() failed or unsupported: Resource temporarily unavailable (errno=11)
Fri Nov 23 16:46:37 2018 Exiting due to fatal error
Fortunately, someone already reported this error to Debian and found out that it is caused by the line
LimitNPROC=10
in /lib/systemd/system/openvpn@.service. The LimitNPROC directive in systemd limits the number of processes a service is allowed to fork. Removing this limit entirely might not be the best idea, so instead of commenting it out, I suggest finding the minimal value that allows OpenVPN to start and work. After some testing, the minimal value that worked for me is
LimitNPROC=27
After editing the /lib/systemd/system/openvpn@.service, you need to reload the systemd service files and restart the OpenVPN server process.
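For example, assuming your VPN instance is named server (so the unit is openvpn@server):
$ sudo systemctl daemon-reload
$ sudo systemctl restart openvpn@server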
There are three things that need to be updated in Debian Stretch in order to get the Radeon RX 550 running properly (or at all): the kernel, Mesa, and proprietary binary firmware (bummer, I know).
First, make sure you have stretch-backports in your apt sources with all the relevant components:
deb http://ftp.debian.org/debian stretch-backports main contrib non-free
Now, the kernel that currently ships with stretch (4.9.0-8) is missing some important configuration options: CONFIG_DRM_AMDGPU_SI and CONFIG_DRM_AMDGPU_CIK. So you will need to install the latest kernel from the backports, which does have them enabled.
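For example, on an amd64 system (the firmware package is included here so everything is upgraded in one go):
$ sudo apt update
$ sudo apt install -t stretch-backports linux-image-amd64 firmware-amd-graphics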
This will also update firmware-amd-graphics, which provides the binary blobs needed by the amdgpu driver to work properly. The old version does not support the new Polaris 12 architecture used by the RX 550, while the version from the backports (20180825) does.
Now comes the part of upgrading Mesa. There are a bunch of binary packages derived from the mesa source package, and we need to upgrade each one of them to version 18 (or later, but 18 is what the backports provide). The following two commands will upgrade any mesa-related package already installed and then re-mark those packages as automatically installed (just to keep things as tidy as they were).
sudo apt install -t stretch-backports $(grep-status -S mesa -a -FStatus "install ok installed" -s Package -n | sort -u)
sudo apt-mark auto $(grep-status -S mesa -a -FStatus "install ok installed" -s Package -n | sort -u)
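(grep-status is part of the dctrl-tools package, so install that first if it is missing.)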
(credit for the two grep-status lines). Now you can restart your computer and the RX 550 should work. You can test it using glxinfo:
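For example (glxinfo is provided by the mesa-utils package):
$ DRI_PRIME=1 glxinfo | grep "OpenGL renderer"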
The DRI_PRIME=1 is necessary; otherwise glxinfo would query the integrated card.
This step is not strictly necessary, but if lspci does not properly display the RX 550, you will need to update the PCI IDs database that is used to translate device IDs into human-readable names.
$ sudo update-pciids
A final word: if you are using TLP for power management, it may not play nice with the RX 550. With TLP enabled I get pretty horrible performance out of the card (regardless of being on AC or battery).
Sharing data between the guest and the host system is necessary in many scenarios. If the guest is a Linux system, you can simply add a shared folder that will be automatically mounted. However, this does not work if the guest is Windows. Sometimes you can work around it by using Samba shares, but in some scenarios the network configuration makes that difficult. For example, when using usermode networking, the host machine can't easily communicate with the guest over the network.
However, there is another way to share folders in virt-manager that actually works for Windows guests: SPICE. The first step is to configure the sharing in virt-manager. In the VM details view, click on "Add Hardware" and select a "Channel" device. Set the new device's name to org.spice-space.webdav.0 and leave the other fields as-is.
Now start the guest machine and install spice-webdav on it. After installing spice-webdav, make sure the "Spice webdav proxy" service is actually running (via services.msc).
Now running C:\Program Files\SPICE webdavd\map-drive.bat will map the shared folder, which by default is the host's ~/Public directory. If you encounter the following error:
System error 67 has occurred.
The network name cannot be found.
It means that the Spice webdav proxy service is not running.
If you want to change the shared folder, you will have to use virt-viewer instead of virt-manager, and configure it under File->Preferences.
CHDK adds a RAW shooting option to many point-and-shoot cameras, including the Canon PowerShot SX710 HS. As the camera does not automatically apply lens correction to RAWs, one has to fix lens distortion when developing them. Lensfun is the go-to place for FOSS lens correction; however, it did not have correction data for the SX710 HS. So I decided to calibrate the lens myself.
Without lens correction
With lens correction applied
The Lensfun tutorial for lens calibration is quite cumbersome. It directs you to go out and shoot some buildings with straight lines, and then manually trace those lines in Hugin and extract the distortion parameters. Well, after searching a bit, I came up with what I think is a better workflow.
I started by creating a page with regularly spaced horizontal lines in Inkscape (using the Cartesian grid tool) and printing it. I printed it on A3 paper, but in hindsight A4 would have worked just the same and is more readily available. I placed the paper on the floor, added some simple lighting, and placed a paperclip in the middle just to make focusing easier.
Next, I took several shots at each focal length, keeping the printed lines parallel to the long side of the image. To make sure I was covering the entire zoom range consistently, I used my set_zoom script. I took calibration shots at zoom steps 0 (minimal focal length), 5, 10, 20, …, 100, 110 and 111 (maximal focal length). After taking all the shots, I copied the DNGs (CHDK saves RAWs as DNGs) from the camera and used exiftool to sort them into directories by focal length:
$ exiftool '-directory<${FocalLength;}' *.DNG
As we will use Hugin for the calibration, and Hugin does not support DNGs, we have to convert them to TIFFs.
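One way to do the conversion is with dcraw (the -T flag writes TIFF output):
$ dcraw -T -t 0 *.DNG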
The -t 0 is used to remove the orientation info from the TIFFs. As I shot the pictures with the camera pointing straight down, that info is often incorrect and messes up the calibration in Hugin.
Load each set of TIFFs of a given focal length into Hugin's Lens Calibration GUI. Make sure the focal length and focal length multiplier are correct (for some reason Hugin was sometimes off by 0.01-0.02), and use the "Find lines" button to automatically detect straight lines. Next, optimize the radial distortion parameters. Those a, b and c parameters are the numbers Lensfun needs.
Hugin Lens Calibration GUI, after line detection and optimization.
The next step is to create a Lensfun profile for the camera. The basic blocks are the <camera> tag and the <lens> tag. Both are mandatory, even though the camera has a fixed lens. For each focal length we calibrated, we add a <distortion> tag, setting the focal attribute and the a, b and c parameters according to the values from the calibration GUI.
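A sketch of such a profile (the mount name, crop factor and distortion values below are illustrative placeholders; substitute your own calibration results):
<lensdatabase>
    <camera>
        <maker>Canon</maker>
        <model>Canon PowerShot SX710 HS</model>
        <mount>canonSX710HS</mount>
        <cropfactor>5.56</cropfactor>
    </camera>
    <lens>
        <maker>Canon</maker>
        <model>Canon PowerShot SX710 HS &amp; compatibles (Standard)</model>
        <mount>canonSX710HS</mount>
        <cropfactor>5.56</cropfactor>
        <calibration>
            <!-- one <distortion> entry per calibrated focal length -->
            <distortion model="ptlens" focal="4.5" a="0.001" b="-0.012" c="0.002"/>
            <distortion model="ptlens" focal="135" a="0.000" b="0.001" c="0.000"/>
        </calibration>
    </lens>
</lensdatabase>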
The lens's model name follows the Lensfun convention for CHDK. However, it makes auto-detection of the lens fail in many applications, such as Darktable. You can work around it by copying the model name exactly from the <camera> tag. The XML file itself should go in ~/.local/share/lensfun.
The two pictures at the beginning of the post show examples of uncorrected and corrected photos using the newly acquired calibration data.
CHDK supports programmatically setting the zoom level on the camera. However, the exact range of zoom steps and, apparently, the translation between a CHDK zoom level and the actual focal length are camera-dependent.
In order to figure out the exact range and the relation between the numerical zoom steps and the focal length, I used the following Lua script to get the current zoom setting and set it to a given value. I also configured the camera's display to show the exact focal length at all times.
-- Guy Rutenberg 2018-08-10
--
--[[
@title Get and Set Zoom
@chdk_version 1.3
@param z Zoom level
@default z 0
--]]
z_old = get_zoom()
print("Current zoom "..z_old)
print("Zooming to "..z)
set_zoom(z)
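Save the script (e.g. as zoom.lua) under CHDK/SCRIPTS/ on the SD card and run it from the CHDK script menu.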
Next, I set the zoom to multiple values and plotted the zoom step against the actual focal length. The results are shown in the graph above.
The range for the zoom steps is 0 to 111, which corresponds to focal lengths of 4.5mm to 135mm. The change is pretty linear in two sections: from 0 to about 85, and again from 85 up to 111.
Adding Google AdSense ads to your WordPress blog used to be a tedious task. Either you needed to manually modify your theme, or you had to use a plugin, such as Google's own AdSense plugin. Even then, placements were limited, and handling both mobile and desktop themes was complicated at best. Recently, two things have changed: Google retired the AdSense plugin and introduced Auto ads.
At first, the situation seemed to have turned for the worse. Without the official plugin, you had to resort to using a third-party plugin or manually placing ads in your theme. But Auto ads made things much simpler. Instead of having to manually place your ads, you can let Google do it for you. It works great on both desktop and mobile themes.
The easiest way to enable Auto ads is using a child theme. First, you need to get the Auto ads ad code. Next, in your child theme's functions.php add the following lines, making sure to replace the JavaScript snippet with your own.
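A sketch of the hook (the snippet below is Google's generic Auto ads code; ca-pub-XXXXXXXXXXXXXXXX is a placeholder for your own publisher ID):
// Add Google AdSense Auto ads code
function my_adsense_header() {
?>
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script>
(adsbygoogle = window.adsbygoogle || []).push({
google_ad_client: "ca-pub-XXXXXXXXXXXXXXXX",
enable_page_level_ads: true
});
</script>
<?php
}
add_action( 'wp_head', 'my_adsense_header' );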
To set up Google Analytics tracking for WordPress you don't need any third-party plugin. It can easily be done using a child theme. A child theme is code that modifies the current theme in a way that won't interfere with future upgrades. To enable Google Analytics, start by creating a child theme using the official documentation.
Now you need to get your Google Analytics tracking code. Over the years the tracking code has had a few different versions. You should make sure you are getting the latest tracking code, which currently looks like:
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-380837-9');
</script>
To get the tracking code, follow the instructions on this page.
Now we add the tracking code to each page using the child theme. In your child theme's directory, edit the functions.php file and add the following lines, replacing the tracking code with the one you acquired.
// Add Google Analytics tracking
function my_google_analytics_header() {
?>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-380837-9"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-380837-9');
</script>
<?php
}
add_action( 'wp_head', 'my_google_analytics_header');
This adds the tracking code to the <head> of every page.
rdiff-backup provides an easy way to maintain reverse-incremental backups of your data. Reverse incremental backups differ from normal incremental backups in that the full backup is synthetically updated on each run, while reverse diffs are kept for all the files changed. It is best illustrated by an example. Let's consider backups taken on three consecutive days:
1. Full backup (1st day).
2. Full backup (2nd day), reverse-diff: 2nd -> 1st.
3. Full backup (3rd day), reverse diffs: 3rd -> 2nd, 2nd -> 1st.
Compare that with the regular incremental backup model, which would be:
1. Full backup (1st day).
2. Diff: 1st -> 2nd, full backup (1st day).
3. Diffs: 2nd -> 3rd, 1st -> 2nd, full backup (1st day).
This makes purging old backups especially easy. Reverse incremental backups allow you to simply purge the reverse-diffs as they expire, because newer backups never depend on older ones. In contrast, in the regular incremental model, each incremental backup depends on every prior backup in the chain, going back to the full backup. Thus, you can't remove the full backup until all the incremental backups that depend on it expire as well. This means that most of the time you need to keep more than one full backup, which takes up precious disk space.
rdiff-backup has some disadvantages as well:
1. Backups are not encrypted, making it unsuitable as-is for remote backups.
2. Only the reverse-diffs are compressed.
The advantages of rdiff-backup make it suitable for creating local Time Machine-like backups.
The following script, set via cron to run daily, can be used to take backups of your home directory:
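A sketch of such a script, assuming your home directory is /home/user and backups go to ~/backups/rdiff-home as in the listing example below (adjust paths and the one-month retention to taste):
#!/bin/sh
# Back up the home directory, skipping the backup destination itself
# and any directory containing a .nobackup file.
rdiff-backup \
    --exclude /home/user/backups \
    --exclude-if-present .nobackup \
    /home/user /home/user/backups/rdiff-home
# Purge backups older than one month.
rdiff-backup --force --remove-older-than 1M /home/user/backups/rdiff-home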
rdiff-backup also accepts --include and --exclude arguments, in order to select only certain files and directories to back up.
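For example (a sketch; the directory names are placeholders):
rdiff-backup \
    --include /home/user/Documents \
    --include /home/user/Pictures \
    --exclude '**' \
    /home/user /home/user/backups/rdiff-home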
The --exclude-if-present .nobackup argument allows you to easily exclude directories by adding a .nobackup file to them. The --force argument, when purging old backups, allows removing more than one expired backup in a single run.
Listing backup chains:
$ rdiff-backup -l ~/backups/rdiff-home/
Restoring files from the most recent backup is simple. Because rdiff-backup keeps the latest backup as a normal mirror on the disk, you can simply copy the file you need out of the backup directory. To restore older files:
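For example, to restore a file as it was three days ago, using the -r (--restore-as-of) flag (a sketch; adjust the paths):
$ rdiff-backup -r 3D ~/backups/rdiff-home/path/to/file /tmp/restored-file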