Setting Up a Swap File on BTRFS

With the release of btrfs-progs 6.1 (available in Debian Bookworm), you can create a swap file using the btrfs utility. It will take care of both preallocating the file and marking it as NODATACOW. As BTRFS can’t snapshot a subvolume that contains an active swap file, we will create a new subvolume for the swap file to reside in.

$ sudo btrfs subvolume create /swap
$ sudo btrfs filesystem mkswapfile --size 4g /swap/swapfile
$ sudo swapon /swap/swapfile

To auto-activate the swap file at boot, add the following line to /etc/fstab:

/swap/swapfile none swap defaults 0 0
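Once the swap file is enabled, you can confirm it is active; these commands only read state, so they are safe to run at any time:

```shell
# List active swap devices and files
swapon --show

# Show overall memory and swap usage
free -h
```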

Encrypted swap file

My entire root filesystem is encrypted, but having unencrypted swap can still lead to sensitive data being inadvertently exposed. The solution is to encrypt the swap file using random keys at boot. Add the following line to /etc/crypttab:

swap /swap/swapfile  /dev/urandom swap,cipher=aes-xts-plain64

And change the swap file location in /etc/fstab to point to /dev/mapper/swap like this:

/dev/mapper/swap none swap defaults 0 0

The encrypted swap will be activated automatically at boot. To start it immediately without rebooting, run:

$ sudo systemctl daemon-reload
$ sudo systemctl start systemd-cryptsetup@swap.service
$ sudo swapon /dev/mapper/swap

PSI-Notify

Adding a small amount of swap can significantly help in preventing the system from running out of memory. psi-notify can alert you when you're running low on memory.

$ sudo apt install psi-notify
$ systemctl --user start psi-notify

Add the following line to ~/.config/psi-notify:

threshold memory some avg10 2.00
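For context, a fuller configuration might look like the sketch below. The update interval and the cpu/io thresholds are illustrative values based on psi-notify's documented options, not recommendations:

```
# Check pressure metrics every 10 seconds
update 10

# Alert when CPU, memory, or I/O pressure crosses these thresholds
threshold cpu some avg10 50.00
threshold memory some avg10 2.00
threshold io full avg10 15.00
```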

You may need to adjust the threshold to ensure it triggers at the right time for your needs. To test memory pressure, you can use the following command:

$ </dev/zero head -c 20G | tail

Fixing the Out of Memory Error When Installing Interactive Brokers TWS on Linux

When installing Interactive Brokers TWS on Linux, I encountered the following error after the installer unpacked the Java Runtime Environment (JRE):

library initialization failed - unable to allocate file descriptor table - out of memory
Aborted

The solution was to increase the open file limit before running the installer:

$ ulimit -n 10000
$ ./tws-stable-linux-x64.sh

Convert PKCS#7 Certificate Chain to PEM

I’m trying to use certificates issued by Microsoft Active Directory Certificate Services (AD CS) to connect to an 802.1X protected network. NetworkManager expects certificates in PEM format, but AD CS issues them in PKCS#7 format (with a .p7b extension). You can use OpenSSL to convert the certificates:

openssl pkcs7 -print_certs -inform DER -in certnew.p7b -out cert-chain.pem

In this command, certnew.p7b is the PKCS#7 encoded certificate chain you received from AD CS, and cert-chain.pem is the desired output file.
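To sanity-check the conversion without a real AD CS bundle, you can round-trip a certificate locally. This sketch creates a throwaway self-signed certificate, packs it into a DER-encoded PKCS#7 file, and converts it back; all file names here are placeholders:

```shell
# Create a throwaway self-signed certificate (placeholder names)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-test" \
    -keyout test-key.pem -out test-cert.pem -days 1

# Pack it into a DER-encoded PKCS#7 bundle, like the one AD CS issues
openssl crl2pkcs7 -nocrl -certfile test-cert.pem -outform DER -out test.p7b

# Convert back to PEM, as in the command above
openssl pkcs7 -print_certs -inform DER -in test.p7b -out test-chain.pem

# The output should contain a PEM certificate block
grep "BEGIN CERTIFICATE" test-chain.pem
```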

Running Concept2 Utility in a VM

I recently found myself needing to transfer some old workout data from my Concept2 PM3 and its associated Logcard. However, as I didn’t have easy access to a Windows machine, and Concept2 only provides their utility software for Windows and Mac, I was compelled to run it in a Windows VM.

Upon connecting the PM3 to my computer and redirecting the USB device to the VM, I encountered an issue: the Concept2 Utility failed to recognize the connected PM3. In an attempt to resolve this, I downgraded the Concept2 Utility to an older version, 6.54. This version did recognize the PM3, but it still failed to recognize the Logcard.

The solution I found was to add the PM3 as a USB host device, rather than using USB redirection. This can be accomplished via the VM’s hardware detail page by selecting Add Hardware -> USB Host Device, or by using the following XML configuration:

<hostdev mode="subsystem" type="usb" managed="yes">
  <source>
    <vendor id="0x17a4"/>
    <product id="0x0001"/>
  </source>
  <address type="usb" bus="0" port="3"/>
</hostdev>

The Vendor ID/Product ID (VID/PID) shown above corresponds to the PM3’s VID/PID. If you’re using a different monitor, you may need to adjust these values accordingly.

For reference, here is how the PM3’s VID/PID appears in the output of lsusb:

Bus 003 Device 004: ID 17a4:0001 Concept2 Performance Monitor 3

Additional details about the setup include:

  • Windows 10 VM
  • QEMU/KVM virtualization via virt-manager
  • Concept2 Utility version 7.14

Regenerate Dracut initramfs images from a live USB

Thanks to an incompatibility between dracut and systemd version 256, I was left with an unbootable system. I booted a live USB and downgraded systemd back to 255, but then had to regenerate the initramfs images so they would be compatible with the older systemd.

While systemd-nspawn is convenient for modifying the system from a live USB, generating dracut initramfs images through it resulted in missing hardware support (for example, no nvme module) due to the abstracted hardware in the container. The solution is to revert to a traditional chroot and generate the images from there:

$ sudo /usr/lib/systemd/systemd-cryptsetup attach root /dev/nvme0n1p3
$ sudo mount /dev/mapper/root /mnt/
$ sudo mount /dev/nvme0n1p2 /mnt/boot/
$ sudo mount --bind /dev/ /mnt/dev/
$ sudo mount --bind /proc/ /mnt/proc/
$ sudo mount --bind /sys/ /mnt/sys/
$ sudo chroot /mnt/
# dracut -f --regenerate-all

Automating DNS Configurations for F5 VPN Tunnel using Systemd-resolved and NetworkManager-dispatcher

F5 VPN does not play well with split DNS configuration using systemd-resolved because it insists on rewriting /etc/resolv.conf. The workaround is to make resolv.conf immutable and configure the DNS settings for the tunnel manually. systemd-resolved has no mechanism for persistent per-interface configuration; it relies on NetworkManager to configure each connection correctly. F5 VPN, however, is not managed by NetworkManager, which makes configuring it this way difficult.

NetworkManager-dispatcher allows you to run scripts on network events. In our case, we will use it to add the DNS configuration automatically whenever the F5 VPN tunnel tun0 comes up, which gives us persistent configuration.

Here is the script:

#!/bin/bash

INTERFACE=$1
STATUS=$2

case "$STATUS" in
    'up')
        if [ "$INTERFACE" = "tun0" ]; then
            # Add your search domains here
            SEARCH_DOMAINS="~example.corp ~example.local"

            resolvectl domain "$INTERFACE" $SEARCH_DOMAINS
            resolvectl dns "$INTERFACE" 192.168.100.20 192.168.100.22
            resolvectl dnsovertls "$INTERFACE" no
        fi
        ;;
esac

The script checks if the interface is tun0 and if the current action is up. If so, it uses resolvectl to configure search domains and local DNS servers. Lastly, DNS over TLS is disabled, as the corporate DNS servers do not support it.

To make this script work, install it in the /etc/NetworkManager/dispatcher.d/ directory as f5-vpn. Make sure it’s executable and writable only by root. NetworkManager-dispatcher will run this script whenever a network interface goes up, automatically setting the DNS configuration for the F5 VPN tunnel.
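The installation can be done in one step with install(1), which copies the file and sets ownership and mode together; this assumes the script was saved as f5-vpn in the current directory:

```shell
# Copy the script into the dispatcher directory, owned by root,
# executable, and writable only by root (mode 0755)
sudo install -o root -g root -m 0755 f5-vpn /etc/NetworkManager/dispatcher.d/f5-vpn
```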

Nginx: Block access to sensitive files

Web servers routinely receive automated probes for configuration files, backups, and database dumps. Nginx makes it straightforward to defend against these by blocking access to specific files or file types.

This post walks through hardening an Nginx server by blocking access to sensitive files. In the snippet below (saved as block_bad.conf), we will deny access to all dotfiles, files ending with ~ or .bak, and the dump.sql file, which often contains database dumps.

Start by saving the following code snippet in the /etc/nginx/snippets/block_bad.conf:

# deny access to all dotfiles
location ~ /\. {
	deny all;
	log_not_found off;
	access_log off;
	return 404;
}

# Allow access to the ".well-known" directory
location ^~ /.well-known {
	allow all;
}

# deny access to files ending with ~ or .bak
location ~ ~$ {
	deny all;
	log_not_found off;
	access_log off;
	return 404;
}
location ~ \.bak$ {
	deny all;
	log_not_found off;
	access_log off;
	return 404;
}

location ~ /dump\.sql$ {
	deny all;
	log_not_found off;
	access_log off;
	return 404;
}

By using the deny all; directive within the location blocks, Nginx will disallow access to the specified locations. The access_log off; and log_not_found off; directives prevent logging access to these blocked file types, and the return 404; directive makes Nginx return a 404 error when an attempt is made to access these files.

Specifically, the location ~ /\. rule blocks access to all dotfiles. These files usually store per-program configuration, and exposing them can create security vulnerabilities. The next location block, with the pattern ^~ /.well-known, overrides the previous one by explicitly allowing access to URIs starting with /.well-known (which is used by certbot).

The next set of directives beginning with location ~ ~$ and location ~ \.bak$ blocks access to files ending with ~ or .bak. These files are often temporary files or backup copies which may accidentally leak sensitive information if accessed by malicious actors.

Lastly, the location ~ /dump.sql$ rule denies access to dump.sql files. These files may contain database dumps, which can expose sensitive database information.

After saving this file, you need to include it in your server block configuration using the include directive. Open the Nginx server block configuration. Then, to load the block_bad.conf, add include snippets/block_bad.conf; in each server block that you want to apply these security rules. Restart the Nginx service after including this rule to apply the changes.
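For illustration, a minimal server block with the snippet included might look like this; the domain and root path are placeholders:

```
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    # Apply the blocking rules from block_bad.conf
    include snippets/block_bad.conf;

    location / {
        try_files $uri $uri/ =404;
    }
}
```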

By taking the steps outlined in this guide, you can enhance the security of your Nginx server by preventing unauthorized access to sensitive files, reducing the potential for information leaks. Remember, securing your servers is a continual process, and consistently updating your configurations to counter evolving threats is good practice.

Changing the webcam’s default power-line frequency

The default powerline frequency for the Logitech C270 webcam is 60Hz, which causes flickering. It can be changed manually via v4l2-ctl or cameractrls, but the change isn’t permanent. To persist the change, we need to create a udev rule. Put the following lines in /etc/udev/rules.d/99-logitech-default-powerline.rules:

# 046d:0825 Logitech, Inc. Webcam C270
SUBSYSTEM=="video4linux", KERNEL=="video[0-9]*", ATTR{index}=="0", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", RUN+="/usr/bin/v4l2-ctl -d $devnode --set-ctrl=power_line_frequency=1"

power_line_frequency value 1 corresponds to 50Hz, 2 to 60Hz and 0 disables the anti-flickering algorithm.
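To apply the new rule without replugging the webcam or rebooting, you can ask udev to reload its rules and re-trigger the video4linux devices:

```shell
# Reload udev rules and re-run them for video4linux devices
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=video4linux
```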

How to Display Battery Percentage for Bluetooth Headphones in GNOME

By default, the Power tab in GNOME’s Settings does not show the battery percentage for Bluetooth headphones like the Sony WH-1000XM3. However, you can enable this feature by activating the DBUS interface of Bluez, the Linux Bluetooth protocol stack. The DBUS interface is hidden behind the --experimental flag for the Bluez service. To enable it, follow these steps:

  1. Create an override file for the bluetooth service:
$ sudo systemctl edit bluetooth

This command will create the file /etc/systemd/system/bluetooth.service.d/override.conf.

  2. Add the following lines to the file:
[Service]
ExecStart=
ExecStart=/usr/libexec/bluetooth/bluetoothd --experimental

Note that both ExecStart= lines are required.

  3. Restart the Bluetooth service:
$ sudo systemctl restart bluetooth
[Screenshot: battery percentage for the Sony WH-1000XM3 under Settings -> Power]

Install GNOME 44 on Debian Unstable

GNOME 44 was recently released, but because Debian Bookworm is undergoing a freeze, it is not yet available in Debian Unstable. It’s very similar to the GNOME 40 situation two years ago. While waiting for the freeze to lift can take a long time, Debian already has most of the updated packages in experimental, and we can upgrade to GNOME 44 through it:

$ sudo apt install -t experimental baobab eog evince gdm3 gjs gnome-backgrounds gnome-calculator gnome-characters gnome-contacts gnome-control-center gnome-disk-utility gnome-font-viewer gnome-keyring gnome-logs gnome-menus gnome-online-accounts gnome-remote-desktop gnome-session gnome-settings-daemon gnome-shell gnome-shell-extensions gnome-software gnome-system-monitor gnome-text-editor gnome-user-docs mutter gnome-desktop3-data
$ sudo apt-mark auto baobab eog evince gdm3 gjs gnome-backgrounds gnome-calculator gnome-characters gnome-contacts gnome-control-center gnome-disk-utility gnome-font-viewer gnome-keyring gnome-logs gnome-menus gnome-online-accounts gnome-remote-desktop gnome-session gnome-settings-daemon gnome-shell gnome-shell-extensions gnome-software gnome-system-monitor gnome-text-editor gnome-user-docs mutter gnome-desktop3-data