How to Install and Configure a Direct Connect Hub (PtokaX) on Ubuntu 16.04

Background:

I wrote this documentation as the process serves as a good template for downloading, compiling from source, installing, configuring, and finally creating a systemd unit that will start the service at boot.

Process:

Download the source from the PtokaX website: http://www.ptokax.org/files/0.5.2.1-nix-src.tgz

wget http://www.ptokax.org/files/0.5.2.1-nix-src.tgz

Install the dependencies:

sudo apt install make g++ zlib1g-dev libtinyxml-dev liblua5.3-dev -y

Expand the archive and change into the directory:

tar -xf 0.5.2.1-nix-src.tgz; cd PtokaX

Compile the program (I’m compiling without database support):

make clean
make
sudo make install

Create a new system user to run the process:

sudo adduser --system --group --no-create-home --disabled-login ptokax

Create the directory in /etc for the configuration files:

sudo mkdir /etc/ptokax

Run the initial configuration, adjust the settings to your taste, then give the ptokax user access to the files:

sudo PtokaX -m -c /etc/ptokax
sudo chown ptokax:ptokax -R /etc/ptokax/*

Create a new file in /lib/systemd/system/ called ptokax.service with the following in it:

[Unit]
Description=PtokaX Direct Connect Hub
After=network.target
#Requires=apache2.service

[Service]
Type=forking
ExecStart=/usr/local/bin/PtokaX -d -c /etc/ptokax
User=ptokax
Group=ptokax

[Install]
WantedBy=multi-user.target

Reload, enable and start the process.

sudo systemctl daemon-reload
sudo systemctl enable ptokax.service
sudo systemctl start ptokax.service


Test the connection:
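PtokaX listens on TCP port 411 by default (or on whatever port you chose during the initial configuration). Assuming the default port, a quick check that the hub is up and accepting connections:

sudo systemctl status ptokax.service
nc -zv localhost 411

For a real-world test, point a Direct Connect client (EiskaltDC++, for example) at dchub://your.server.address:411 and confirm that it connects.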

Final Notes:

If you want to make configuration changes, stop the service first, then either run the PtokaX binary as sudo with the -m -c /etc/ptokax flags to reconfigure it, or manually edit its configuration files.

Further Reading:

http://wiki.ptokax.org/doku.php?id=guides:debian_bugbuntu
http://patrakov.blogspot.com.au/2011/01/writing-systemd-service-files.html


Enabling PCI passthrough of Hauppauge QuadHD PCIe TV Tuner Card with a Marvell 88SE9230 SATA controller

Background:

As the title suggests, this is a complex problem that I’ve had to work with. The goal has been to create a virtual machine running MythTV that can utilise the PCIe tuner card on the hypervisor.

The first step in the process was to compile and install the latest kernel image (at the time of writing this was 4.9.9). This was necessary as the kernel version that ships with Ubuntu 16.04 (4.4.0.xx) does not have the most recent drivers that the tuner needs to function. I completed this step successfully; for more information, please see my previous posts.

Unfortunately, enabling iommu in the kernel activated a bug in the additional PCIe SATA card I have installed in the hypervisor that stopped the whole system from booting. More on that in a minute.

Affected Hardware:

Startech PEXSAT34RH 4-Port PCI Express 2.0 SATA Controller Card with a Marvell 88SE9230 chipset.
Hauppauge QuadHD PCIe TV Tuner Card.
Intel S1200SPL motherboard with an AXXRMM4LITE RMM4 module installed.

PCI devices identified through lspci as:

05:00.0 Multimedia video controller: Conexant Systems, Inc. CX23887/8 PCIe Broadcast Audio and Video Decoder with 3D Comb (rev 04)
06:00.0 Multimedia video controller: Conexant Systems, Inc. CX23887/8 PCIe Broadcast Audio and Video Decoder with 3D Comb (rev 04)
02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)

Partial resolution:

The first step was to enable the IOMMU in the kernel without breaking the SATA controller card. The solution was to enable the IOMMU and set it to passthrough mode. This can be achieved on an Ubuntu system by editing /etc/default/grub and adding intel_iommu=on iommu=pt to the default Linux command line. For my system it now looks like this:

GRUB_CMDLINE_LINUX_DEFAULT="nomodeset intel_iommu=on iommu=pt"

At the command line, run sudo update-grub and reboot.

The rest of the process, that includes adding the hardware to the VM host and enabling the pci_stub kernel module can be found in previous posts on my blog.

The only difficulty I encountered, and didn’t mention in my last blog post, was ensuring that the PCIe devices do not share IRQs. To check, I cross-referenced the output of:
:$ find /sys/kernel/iommu_groups/ -type l
with
:$ lspci
and could confirm that the DVB-T tuner card had two interrupts and did not share them with any other hardware device. More on that here.
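If you’d rather not cross-reference the two lists by eye, a short loop over sysfs can print each IOMMU group together with a description of the devices in it. A minimal sketch, assuming the standard sysfs layout:

#!/bin/bash
# Print every IOMMU group and describe the devices it contains.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        # The directory name is the PCI address; lspci -nn -s prints its vendor/device names and IDs.
        echo -n "    "
        lspci -nn -s "${device##*/}"
    done
done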

Continuing problems:

After finally managing to get the PCI passthrough working, which I verified by checking dmesg on the VM, I launched mythtv-setup and configured the tuner cards. MythTV successfully added them and I could add them to a video source. Unfortunately, the system crashed when it tried to do an initial tune.
The console on the KVM host output the error:
vfio-pci pcie bus error severity=(uncorrected _Fatal), type=unaccessible,id=500(unregistered Agent ID)
and the console on the virtual machine output the error:
mpeg risc op code
and then promptly crashed.

Thankfully I have a backup single USB tuner; however, it seems the quest to get the quad tuner working properly continues.

Further reading:

IOMMU Bug in the 88SE9230 Chipset:
https://lime-technology.com/forum/index.php?topic=54410.0
http://lime-technology.com/forum/index.php?topic=40683
https://lime-technology.com/forum/index.php?topic=33511.0
Product Website

PCI Passthrough:
http://vfio.blogspot.com.au/2015/05/vfio-gpu-how-to-series-part-3-host.html

Hauppauge QuadHD PCIe TV Tuner Card:
LinuxTV Page
Product Website


How to Fix the Intel RMM4 No Signal on Linux

After installing the Remote Management Module AXXRMM4LITE into my Intel S1200SPL, I was disappointed to find that I couldn’t see any output when using the Java applet.

Now, originally, I had noticed that after installation of the OS there was no output to the monitor once Linux had booted. I got around this by installing a spare NVS300 graphics card and telling the BIOS to use it as the primary output.

Sensing that the two were related, I removed the graphics card and told the BIOS to use the onboard graphics as the primary display. I still had the ‘no signal’ error in the Java applet, but at least the hardware was configured correctly. After doing some reading and searching, I was able to fix the issue by editing the kernel boot parameters: in /etc/default/grub I added the option nomodeset to GRUB_CMDLINE_LINUX_DEFAULT="", such that it read:

GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

Then I updated grub with:
:$ sudo update-grub
:$ sudo shutdown -hr now

And after rebooting, I was able to remotely see the console.

Further Reading:

https://community.linuxmint.com/tutorial/view/842

How to Compile the Linux Kernel from Source on Ubuntu 16.04 LTS

Background/Problem:

My KVM host, after a recent upgrade (see posts below), cannot start with the kernel option iommu=on enabled. Technically it can boot; however, the system is unusable due to a driver/bug issue with an additional SATA card I have installed:
:$ lspci

02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)

Disks simply do not register when using iommu. The Bugzilla report can be found here and more information can be had here. The references are old, so my hope is that it has been patched in the latest kernel images.

Furthermore, I need the latest kernel to use the quad tuner PCIe card I have:
:$ lspci

05:00.0 Multimedia video controller: Conexant Systems, Inc. CX23887/8 PCIe Broadcast Audio and Video Decoder with 3D Comb (rev 04)

The quad tuner needs kernel 4.8 to run. More information here. So fundamentally, I need to compile the latest stable kernel image to get full use of my system, and then pass the PCIe tuner card through to my media VM.

The Process:

I recommend doing this as I have done: inside a reasonably powerful virtual machine. I’ve gone back through and corrected my instructions where I ran into problems. This process will generate a Debian package that you can install on any Debian-based OS (such as Ubuntu).

Problems:

  1. Not giving enough RAM, CPU and disk space to the VM to compile (at all) or in a timely manner.
    1. I’ve given my VM 4 cores, 4GB RAM and an ‘external’ hard drive of 30GB to use to compile the kernel.
  2. Not utilizing all the cores. Add the line CONCURRENCY_LEVEL=4 to /etc/kernel-pkg.conf to use all 4 cores when compiling (once the kernel-package package is installed, see below).
  3. Not having some essential packages installed, such as libssl-dev, which caused the process to stop.

Using all the cores makes it go much faster!

Steps:

At the command prompt, install all the packages you need to compile the kernel:
:$ sudo apt-get install fakeroot kernel-package gcc build-essential libncurses5-dev qt5-default libssl-dev

Download, to a disk that has ~20GB free, the latest stable kernel version. At the time of writing this was 4.9.9. Extract it and cd into the directory
:$ wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.9.9.tar.xz
:$ tar -xf linux-4.9.9.tar.xz
:$ cd linux-4.9.9

Assuming you’re running this in a desktop Linux environment, run make xconfig; alternatively, if you’re working from a terminal, make menuconfig will do.

The default settings should do in most instances.

Save and close the configuration. Make the build environment clean, then begin the compile process:
:$ make-kpkg clean
:$ fakeroot make-kpkg --initrd --revision=4.9.9.linux kernel_image kernel_headers

You can save time by compiling a kernel with only the hardware that you have installed. Do this by deselecting them in the xconfig/menuconfig. The downside is that if you add new hardware, you’ll need to recompile the kernel.
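As an aside, the kernel build system can do much of that pruning automatically: the localmodconfig make target disables every module that isn’t currently loaded on the build machine. It only knows about hardware that was active when you ran it, so review the result in xconfig/menuconfig afterwards:

:$ make localmodconfig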

For an explanation on the above fakeroot command, please see this Debian manual page. You should now have a custom kernel image compiling.

Once it’s completed, cd to the upper directory, and install the kernel:
:$ cd ..
:$ sudo dpkg -i linux-image-4.9.9_4.9.9.linux_amd64.deb linux-headers-4.9.9_4.9.9.linux_amd64.deb
:$ sudo shutdown -hr now

After restarting the VM, you can check the currently running version of the kernel by typing at the command prompt:
:$ uname -r

Additional step:

Prove to yourself that you’ve created a usable package by spinning up a shiny new VM, sftp-ing the Debian package to it, then installing and rebooting.
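A minimal sketch of that test, where user and testvm are placeholders for the test VM’s account and address:

:$ scp linux-image-4.9.9_4.9.9.linux_amd64.deb user@testvm:
:$ ssh user@testvm
Then, on the test VM:
:$ sudo dpkg -i linux-image-4.9.9_4.9.9.linux_amd64.deb
:$ sudo shutdown -hr now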

Further reading:

https://www.cyberciti.biz/faq/debian-ubuntu-building-installing-a-custom-linux-kernel/

Rsync logging

The Problem:

My media server runs an rsync job via ssh to another server every night between 2230 and 0600. Every time the script runs, it generates a new time-stamped log file. Eventually there are quite a few log files that require manual cleanup. I want to automate this process and clean up the log file generation.

The log files:

The bash script is kept in my home directory (for easy, unscheduled syncs) and is executed using cron.

My crontab file contains:

30 22 * * * /home/wargus/rsync.sh >/dev/null 2>&1
0 6 * * * killall rsync >/dev/null 2>&1

Contents of script:

#!/bin/bash
rsync --bwlimit=450 --delete --protect-args --size-only --copy-dirlinks --log-file=/var/log/rsync/log.`date +"%Y%m%d_%H%M%S"` -avPe ssh "/path/to/files/" "user@host:/path/to/files/"

I won’t go into the details of the rsync command above; suffice it to say it works and limits bandwidth to something reasonable for a slow, home ADSL connection. I expect that will change when the NBN finally (if ever) arrives at my off-site location. For this to work, I did have to generate ssh keys to allow the job to execute without user intervention.
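For completeness, key-based login for the account that runs the cron job can be set up along these lines, where user@host is a placeholder for the destination server:

ssh-keygen -t rsa -b 4096
ssh-copy-id user@host

ssh-keygen generates the key pair (leave the passphrase empty for unattended use) and ssh-copy-id installs the public key on the remote server.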

The Solution:


The addition of two lines to my rsync.sh, above the rsync command, did the trick:

find /var/log/rsync/ -mtime +8 | xargs -I % sh -c 'rm -f %';
find /var/log/rsync/log.* | xargs -I % sh -c 'tar -rf /var/log/rsync/rsync.1.tar %; rm -f %';

The first line finds anything older than 8 days and then, using the list output by find, deletes those files. On first run it deleted all my older log files; going forward, it will remove the archive once it is more than 8 days old.

The second line will find every log file in that directory and append it to an archive (creating the archive first if it does not exist), removing each log file once it has been archived.

Now, when the script executes, I have no problem knowing what the newest log is and in case I want to check older ones, I can open the archive and have a look.

Rebuilding the Hypervisor with new hardware

Good news!
I’ve spent the day rebuilding the server. I’ve completely overhauled the system, replacing what was an aging AMD octacore with a new Intel server.

New specs are:
CPU: Intel E3-1245v5 3.5GHz 4 Core, 8 threads.
Motherboard: INTEL S1200SPL
RAM: 32GB (4x8GB) Crucial 2400MHz ECC
Other components: Nvidia NVS300 gfx card, quad DVB-T tuner, additional SATA raid card for the 11 hard disks in the two raid arrays.
All housed in a Cooler Master Cosmos II Full Tower.

All in all, the migration has been very smooth. I’ve been able to get all the VMs up and running again without much fuss. I didn’t realise that remote console through the Intel BMC web console was not possible without an additional component, so I’ll be ordering an Intel remote management component (AXXRMM4LITE2) very soon.

PCI and File System Pass-through on KVM

Putting Landscape aside for a moment: that, for the record, I was able to get up and running by following the documentation. The server generated snake-oil SSL certificates and enabled SSL by default, which would mean quite a lot of reconfiguring to make it work behind the reverse proxy. More problematic was that the other machines, when trying to connect to the Landscape server, would reject the connection due to the self-signed certificate. The mechanisms of Landscape aren’t clear, so at this point I’m unsure if this would be a problem if I disabled SSL on Apache only (thereby allowing the reverse proxy to handle SSL, and having all the Landscape clients connect via it) or if the Landscape service itself also needs the SSL certificates. If that’s the case then the challenge will be to have the current SSL certs copied to the Landscape server when they’re renewed (or nightly via rsync and cron).

So as I said, I’m putting that aside for the moment to focus on changing how the VMs on my KVM server access local files, and on migrating the last few services on the KVM host itself to a VM. Currently that includes Samba/SMB file shares, MythTV and Plex.

The biggest hurdle is to move MythTV to a VM, as it will require PCI passthrough for the TV tuner card. This is possible, and the documentation makes it clear how to achieve this; however, when I initially passed the TV tuner card to the VM, the VM refused to start. Similarly, USB devices were not being passed through.

After researching, it appears there is a bug in AppArmor that stops USB devices from being passed through. A solution is available here.

The PCI problem was a little more complex. After checking the error logs for KVM and dmesg, and googling what I could, the problem ended up being that PCI cards and devices that share a bus also share interrupts, and therefore all have to be added to the virtual machine together; the system cannot differentiate between them. After checking the output of lspci and comparing it to the list of devices in /sys/kernel/iommu_groups/11 (group 11 was where all the devices I needed to pass through were), I added all the components of the TV tuner card and an IEEE 1394 port on the motherboard (which I have never used) to the VM. To make my life easy and ensure I didn’t make mistakes, I wrote it out as a script, based on the documentation here.

#!/bin/bash

echo "14f1 8800" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:05:06.0" > /sys/bus/pci/devices/0000:05:06.0/driver/unbind
echo "0000:05:06.0" > /sys/bus/pci/drivers/pci-stub/bind

echo "14f1 8802" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:05:06.2" > /sys/bus/pci/devices/0000:05:06.2/driver/unbind
echo "0000:05:06.2" > /sys/bus/pci/drivers/pci-stub/bind

echo "14f1 8804" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:05:06.4" > /sys/bus/pci/devices/0000:05:06.4/driver/unbind
echo "0000:05:06.4" > /sys/bus/pci/drivers/pci-stub/bind

echo "1106 3044" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:05:0e.0" > /sys/bus/pci/devices/0000:05:0e.0/driver/unbind
echo "0000:05:0e.0" > /sys/bus/pci/drivers/pci-stub/bind

Executing the script and then adding all the above PCI devices did the trick. The VM now starts and lists all the PCI devices:

wargus@media:~$ lspci

00:08.0 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
00:09.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
00:0a.0 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05)
00:0b.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

A quick probe of lsmod also shows that the v4l2 drivers are loaded as are the drivers for the TV tuner card (cx8800). Changes are persistent after a restart of the VM host, too.

Setting up Plex media server, mythTV and samba shouldn’t be a challenge from this point.

This leaves the last challenge – setting up file system pass-through on the VMs. The documentation, here and here, perhaps wasn’t as helpful as it could be. I tested out FS pass-through on my mail server first, as it also hosts my Nextcloud installation. I wanted to move the data files that constitute my Nextcloud storage to the much roomier RAID+LVM on the KVM host itself.

There was, suffice it to say, a lot of faffing about before I managed to get it to work. The screenshot below shows the configuration in virt-manager.

What it does not show are the file permissions on the KVM host. The directory has the permissions of:
ls -l
drwxr-xr-x 3 libvirt-qemu kvm 18 Jan 1 09:44 nextcloud

This is because in ‘mapped’ mode “files are created with Qemu user credentials and the client-user’s credentials are saved in extended attributes”, whereby client-user refers to users on the VM. Once mounted on the guest OS with:

sudo mount -t 9p -o trans=virtio,version=9p2000.L /nextcloud /nextcloud/

I was able to copy in the data directory (while apache2 was off), preserving the ownership and permissions of the files.

On the host OS, the files all appear to be owned by libvirt-qemu and kvm; on the guest OS, they all appear to be owned by the www-data user. The final step is, of course, to make the changes persistent by editing the /etc/fstab file and adding the line:
/nextcloud /nextcloud 9p trans=virtio,version=9p2000.L 0 0
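To confirm the fstab entry works without rebooting, remounting everything from fstab is a quick test:

sudo mount -a

If that returns without errors and the share appears in the output of mount, the 9p filesystem will come up on every boot.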

Fixing little things

This was a weekend of Christmas/fixing niggling problems on my systems.

  1. Migrate to Nextcloud from ownCloud.
    1. Easy – thanks to documentation and a blog post.
  2. Fix accessibility problems with the calendar plugin in nextcloud.
    1. This was primarily caused by my ignorance: when adding calendars in, say, Thunderbird, you need to be very specific with the URL, and Nextcloud does not make it clear what the URL for a specific calendar is unless you go looking for it. Simply adding the primary address (https://www.warbel.net/owncloud/remote.php/dav/) will not work; see the example URL after this list.
  3. Fix the zabbix server, as it wasn’t starting with the system.
    1. Checking the service status showed the issue here. It wasn’t set to start automatically. Fixed with sudo systemctl enable zabbix-server.service.
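For reference, individual Nextcloud calendars live at per-calendar CalDAV URLs. A hypothetical example for a user wargus with a calendar named personal (both placeholders; the exact link can be copied from the calendar’s settings in the Nextcloud web interface):

https://www.warbel.net/owncloud/remote.php/dav/calendars/wargus/personal/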

As far as getting Landscape up and running, this seems a little problematic. After spinning up a VM and using the documentation, I can’t seem to add machines to the Landscape server. This is because it uses self-signed SSL certificates. The solution is easy enough: provide the Let’s Encrypt certificates. However, because it sits behind a reverse proxy, the website will need to be configured not to use SSL, but as far as I can tell so far, the Landscape service itself needs the SSL certificate. This can be fixed by using rsync and cron to move the necessary files, but it’s going to be a pain. We’ll see.

Configuring Mattermost on Ubuntu 16.04 with Apache, Mysql and Let’s Encrypt

First some background:

I’ve become increasingly aware of ‘free’ services like Slack (Facebook and Google also fit into this category). While they do offer free, convenient services, the true cost is to your privacy and security. Having some technical knowledge means that I can have the convenience and features of their platforms without having to sacrifice either – it’s the best of both worlds!

My notes below will only cover some of the more difficult aspects of configuring Apache, and how I circumvented Let’s Encrypt’s process of creating and accessing hidden directories. The Mattermost documentation is your friend too; their setup guides were accurate and effective. To anyone reading this: I strongly suggest setting up a test server first before attempting to create a production system from scratch.

Setting up Mattermost:

As stated above, documentation is your friend. I set up the system on a test VM and was able to get it running with minimal fuss. I created a MySQL user and database for the installation on my production web server using phpMyAdmin, and from there the rest of the configuration was done from within Mattermost itself. Mattermost encourages you to use Nginx and PostgreSQL. To configure MySQL in Mattermost, no changes to the SqlSettings are needed except the DataSource directive, which you will need to modify to suit the username/password/database that you set up.
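As an illustration, the relevant block of config.json for a MySQL setup takes roughly this shape, where mmuser, mostest and mattermost are placeholders for the user, password and database you created:

"SqlSettings": {
    "DriverName": "mysql",
    "DataSource": "mmuser:mostest@tcp(127.0.0.1:3306)/mattermost?charset=utf8mb4,utf8"
},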

Once I was satisfied with the setup, I migrated the directory structure from the testbed to the production server and set up the init scripts as per the Mattermost documentation, so it runs as a service under its own account. I will stress that if you do that, be sure to check the data directory directive so that the right locations are accessible. If you have any trouble with Mattermost, the logs are a good place to start looking for the problem. 😉

Setting up Let’s Encrypt:

This is really a two-stage process. The first stage is to set up a sub-domain using your DNS provider. I use NoIP, as I can use their client to update my dynamic IP address if and when my internet connection drops out.

As Mattermost runs on a high port number and Apache had not yet been configured as a reverse proxy, I needed to run Let’s Encrypt in standalone mode. In this mode, letsencrypt acts as its own HTTP server in order to verify you have control over the domains you’re trying to create SSL certificates for. The commands I ran looked like this:

sudo service apache2 stop

sudo letsencrypt certonly --standalone -d www.warbel.net -d bel.warbel.net -d blog.warbel.net -d mattermost.warbel.net

sudo service apache2 start

Let’s Encrypt recognized that I needed to add the new mattermost sub-domain to my list of sites and updated the certificate accordingly.

Configuring Apache:

Originally I had intended to set up Mattermost as a subdirectory on my primary domain, which was in keeping with my previous projects. Unfortunately, that seemed impossible. In the end it was easier to set up a sub-domain and then configure Apache with a new site. I had to do some serious googling to find a semi-working config. Mattermost uses WebSockets and application programming interfaces (APIs), which do not play well with reverse proxies out of the box. Furthermore, as Let’s Encrypt had already reconfigured components of Apache, I had to modify what I found to match my pre-existing sites.

I created a new site in /etc/apache2/sites-available/ called mattermost.warbel.net.conf and, working off this configuration file as an example, created the below:

<VirtualHost mattermost.warbel.net:80>
ServerName mattermost.warbel.net
ServerAdmin xxxx@warbel.net

ErrorLog ${APACHE_LOG_DIR}/mattermost-error.log
CustomLog ${APACHE_LOG_DIR}/mattermost-access.log combined

# Enforce HTTPS:
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
</VirtualHost>
<IfModule mod_ssl.c>
<VirtualHost mattermost.warbel.net:443>
SSLEngine on
ServerName mattermost.warbel.net
ServerAdmin xxx@warbel.net

ErrorLog ${APACHE_LOG_DIR}/mattermost-error.log
CustomLog ${APACHE_LOG_DIR}/mattermost-access.log combined

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/api/v1/websocket [NC,OR]
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC,OR]
RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
RewriteRule .* ws://127.0.0.1:8065%{REQUEST_URI} [P,QSA,L]
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule .* http://127.0.0.1:8065%{REQUEST_URI} [P,QSA,L]
RequestHeader set X-Forwarded-Proto "https"

<Location /api/v1/websocket>
Require all granted
ProxyPassReverse ws://127.0.0.1:8065/api/v1/websocket
ProxyPassReverseCookieDomain 127.0.0.1 mattermost.warbel.net
</Location>
<Location />
Require all granted
ProxyPassReverse https://127.0.0.1:8065/
ProxyPassReverseCookieDomain 127.0.0.1 mattermost.warbel.net
</Location>

ProxyPreserveHost On
ProxyRequests Off
SSLCertificateFile /etc/letsencrypt/live/www.warbel.net/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/www.warbel.net/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

The main difference from the example is that the SSL virtual host needed to be contained within the SSL module configuration. I’ll also point out that there was a typo in the last ProxyPassReverse directive of the example config: the URL was missing the https, which stopped the site from pushing new chat messages automagically to clients connected to the server.
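One caveat: the rewrite, header and proxy directives above depend on several Apache modules that are not all enabled on a stock Ubuntu install. If any are missing, something like the following should cover this configuration:

sudo a2enmod ssl rewrite headers proxy proxy_http proxy_wstunnel
sudo systemctl restart apache2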

Enable the site with:

sudo a2ensite mattermost.warbel.net

and reload or restart Apache – you should now have a working Mattermost server.