Python Scripts for interacting with phpIPAM and PowerDNS Admin

I’ve spent some time this weekend writing some scripts in Python that will function as a preliminary building block for future automation – simplifying the creation of VMs in my network.

I won’t go into too much more detail here, but I’ve published them on GitHub. Future updates will make the scripts work smoothly with my Ansible build processes, but for now, they do the job.

https://github.com/wargus85/phpIPAM_PDNS_Scripts 
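As a flavour of what the scripts automate, the sketch below shows the two API calls involved: asking phpIPAM for the first free address in a subnet, then PATCHing a matching A record into PowerDNS. The base URLs, app id, and example values are placeholders for illustration, not the repository’s actual code.

```python
# Sketch of the workflow: reserve the next free IP in a phpIPAM subnet,
# then publish a matching A record via the PowerDNS HTTP API.
# PHPIPAM/PDNS bases and the "myapp" app id below are hypothetical.
import json

PHPIPAM = "https://ipam.example.net/api/myapp"   # hypothetical phpIPAM API base
PDNS = "https://dns.example.net/api/v1"          # hypothetical PowerDNS API base

def first_free_url(subnet_id: int) -> str:
    """phpIPAM endpoint that hands out the first free address in a subnet."""
    return f"{PHPIPAM}/addresses/first_free/{subnet_id}/"

def a_record_patch(fqdn: str, ip: str, ttl: int = 300) -> dict:
    """RRset payload for PATCHing a PowerDNS zone with a new A record."""
    return {
        "rrsets": [{
            "name": f"{fqdn}.",          # PowerDNS wants the trailing dot
            "type": "A",
            "ttl": ttl,
            "changetype": "REPLACE",
            "records": [{"content": ip, "disabled": False}],
        }]
    }

if __name__ == "__main__":
    print(first_free_url(7))
    print(json.dumps(a_record_patch("vm01.warbel.net", "10.0.7.10")))
```

The real scripts handle authentication tokens and error cases; this only shows the shape of the two requests.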

Projects and Upgrades

‘If you wish to make an apple pie from scratch, you must first invent the universe’

– Carl Sagan

Since my last post, I’ve completed a major overhaul of my home network and server infrastructure. Fundamentally this began with:

  • Purchasing a new single-rack-unit QNAP, a 2.5G switch, and 2.5G NICs for my hypervisors;
  • Purchasing another HP MicroServer (for a grand total of three hypervisors and one off-site server);
  • Configuring the QNAP as an NFS server;
  • Configuring networking for the QNAP and servers, including VLANs and IPv6 (via SLAAC);
  • Migrating all my systems from Linux Containers to VMs, using the QNAP as the storage backend.

All of which has given me much-needed flexibility – I can now migrate live VMs between hosts. In the process I’ve also mapped out all new IP ranges and VLANs for my infrastructure, managed in phpIPAM and integrated with PowerDNS. The hypervisors are configured with separate bridges for the guest OSes and their subnets. My EdgeRouter handles DHCP and IPv6 prefix delegation for the VLANs/subnets. Having done all of this, I now feel as though I’ve really unleashed the full power of my infrastructure.

With the above complete, I then finally removed my Windows Server VMs, and now use Samba as an AD server which is federated into Keycloak (more on that below). I had wanted to move away from AD, but the alternative – a pure OpenLDAP or Apache Directory system – was going to be more of a learning curve and provide less out-of-the-box functionality and less Windows desktop integration than I needed. That said, I’m very happy to have moved to an entirely open-source setup at home.

With the basics now in place, I was able to configure some new apps and systems including:

  • A new Samba file server for my wife and me to store files from our Windows desktops. The system is joined to the above domain controller, and I created share drives with NTFS permissions. I’ve transferred all the old files to the new directories; access is controlled by AD security groups.
  • A WireGuard VPN server – using PiVPN. This worked great on my phone/tablet; however, I did need to fix the client configuration on my Kubuntu laptop. Fundamentally, resolvconf isn’t compatible with the way WireGuard configures DNS. This was fixed with the below in the WireGuard client configuration:

# DNS = IP_of_DNS_Server1, IP_of_DNS_Server2
PostUp = resolvectl dns %i IP_of_DNS_Server1 IP_of_DNS_Server2; resolvectl domain %i warbel.net ~.
  • I’m also happy to note that WireGuard works with IPv6 – all clients report the IPv6 address of the VPN server when I poll https://icanhazip.com. PiVPN is a great solution for a small number of clients but does lack any LDAP/AD SSO integration.
  • Keycloak: I’ve been keen to learn how SSO works and was introduced to the Keycloak project by a former colleague. I’ve since learned a great deal and have set up SSO across all my web applications: Nextcloud, this blog, Gitea, my IPAM and DNS solutions, MediaWiki, and SSO-unaware applications that sit behind the Apache reverse proxy. Keycloak is also federated with AD. I’m particularly proud of setting up Apache to not only authenticate users against Keycloak, but to grant access only to users who are members of specific groups.
  • Set up a password self-service web app – https://ltb-project.org/ – meaning my users (read: my wife) can reset her (now federated) password.

Learning PUML for Project Design and Delivery

I stumbled across the “C4 Model for visualising software architecture” via a blogpost I saw on HackerNews back before Christmas and had been waiting for an opportunity to try out the model for myself. I’ve always hated making system design diagrams, probably because I’ve always found it:

  • time consuming;
  • impossible to meaningfully track changes;
  • difficult because I lacked appropriate tooling and a methodological framework; and
  • boring.

I liked what I saw because it addressed the first three problems: I could cut down how long it took to make diagrams by writing PUML code, track changes in a git repository, and configure VSCode with plugins to generate the images and relationships programmatically for use in documentation/wikis. And because it meant I could write my diagrams in code, it fixed the fourth problem too – now I actually enjoy making diagrams.

I now use the VSCode PlantUML plugin, which on Linux also required the installation of Graphviz:

sudo apt install graphviz -y

And after some basic configuration to get the plugin working, it becomes quite easy to create diagrams.

@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

Person(personAlias, "Label", "Optional Description")
Container(containerAlias, "Label", "Technology", "Optional Description")
System(systemAlias, "Label", "Optional Description")

Rel(personAlias, containerAlias, "Label", "Optional Technology")
@enduml 

[Screenshot: VSCode with the PlantUML rendering plugin]

Obviously I cannot include any of the diagrams I’ve made for work here on my blog, so I’ve used code and examples from the Git repository for the C4 Model instead. The repository includes everything you need to generate visually pleasing models. After a few days of working from the examples in the repo, I was able to make quite useful diagrams illustrating the new file transfer system I’ve been designing and building at work. The C4 Model repository also makes use of another repository of images/sprites, which I’ve used to add some flair and comprehensibility to the diagrams.

Python Scripts for Interacting with LDAP

This is more or less just a post about my public github repository: https://github.com/wargus85/PythonLDAPScripts

There is more information in the README file in the repository. However, to summarise, I’ve written some Python scripts that will reach out to Active Directory and look up group membership. Because I couldn’t find something similar when I started coding the scripts, I thought it would be a good idea to publish them online.
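For flavour, the core of such a lookup looks roughly like the sketch below, using the ldap3 package. Server details, DNs, and credentials are placeholders; the matching-rule OID is AD’s nested-membership (LDAP_MATCHING_RULE_IN_CHAIN) rule. The scripts in the repository remain the authoritative version.

```python
# Sketch: build an AD group-membership filter and (optionally) run it.
# The OID 1.2.840.113556.1.4.1941 tells AD to chase nested group membership.

def member_filter(group_dn: str, nested: bool = True) -> str:
    """LDAP filter matching users who are members of group_dn."""
    attr = "memberOf:1.2.840.113556.1.4.1941:" if nested else "memberOf"
    return f"(&(objectClass=user)({attr}={group_dn}))"

def group_members(server_uri, bind_dn, password, base_dn, group_dn):
    """Query AD for sAMAccountNames of group members (needs `pip install ldap3`)."""
    from ldap3 import Server, Connection, SUBTREE
    conn = Connection(Server(server_uri), bind_dn, password, auto_bind=True)
    conn.search(base_dn, member_filter(group_dn), SUBTREE,
                attributes=["sAMAccountName"])
    return [str(e.sAMAccountName) for e in conn.entries]
```

Calling `group_members("ldaps://dc.example.net", "CN=svc,…", "secret", "DC=example,DC=net", "CN=Admins,…")` would return the flattened member list, including members of nested groups.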

Configuring Ubiquiti Powerbeam with custom TLS Certificates

Background

I recently re-connected to the WA Freenet (discord here), an open WiFi wide-area-network that spans the Perth metropolitan area. Perth has an ideal geography for a WiFi network as it is extraordinarily flat, with an escarpment running along the eastern spine that has excellent line-of-sight to the suburbs.

I had joined the WAFN many years ago and it is thanks to the other operators on the network that, at the time, I was able to learn how to configure IPv4 subnets, firewalls, and BGP routing, skills that have served me well in my professional career.

Fast-forward to the present day: I was pleasantly surprised to find that I could connect to an existing AP with links to the backbone of the network from my address. So I purchased two new radios – a Powerbeam AC 500 for my link in Ardross, and a Powerbeam 5AC Gen2 for my roof at home. Without much hassle, and with the support of the WAFN community, I was able to reconfigure my Ubiquiti EdgeRouter with BGP, advertise my routes, and accept those advertised to me.

Problem:

I personally hate seeing the ‘this website is insecure’ messages that appear when a site uses self-signed certificates. At home, I secure all my internal websites, devices and appliances with my internal CA certificates, so I wanted to do the same for the new radios. However, I was unable to find a website that outlined the entire process end-to-end, so I thought I should write one myself.

Process:

Firstly, download the custom-script firmware for your device and install it. From the table below, it is easy to deduce the URL to download the appropriate firmware from UBNT.

Firmware Table Examples

Version | Model | Non-CS firmware | Custom-script (CS) firmware
8.7.11 | Powerbeam 5AC 500 | https://dl.ui.com/firmwares/XC-fw/v8.7.11/XC.v8.7.11.46972.220614.0419.bin | https://dl.ui.com/firmwares/XC-fw/v8.7.11/XC.v8.7.11-cs.46972.220614.0419.bin
8.7.11 | Powerbeam 5AC Gen 2 | https://dl.ui.com/firmwares/XC-fw/v8.7.11/WA.v8.7.11.46972.220614.0420.bin | https://dl.ui.com/firmwares/XC-fw/v8.7.11/WA.v8.7.11-cs.46972.220614.0420.bin
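As the table shows, the CS URL can be derived mechanically from the standard one – the version component in the filename simply gains a `-cs` tag. A small helper (my own illustration, assuming Ubiquiti keeps this naming scheme) makes the rule explicit:

```python
# Derive the custom-script (CS) firmware URL from the standard download URL.
# Assumes UBNT's current naming scheme: "XC.v8.7.11.<build>..." becomes
# "XC.v8.7.11-cs.<build>..." in the same directory.
import re

def cs_url(non_cs_url: str) -> str:
    """Tag the version in the firmware filename with '-cs'."""
    head, _, filename = non_cs_url.rpartition("/")
    filename = re.sub(r"(v\d+\.\d+\.\d+)\.", r"\1-cs.", filename, count=1)
    return f"{head}/{filename}"
```

For example, `cs_url(...)` applied to the first Non-CS URL in the table yields the matching CS URL.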

Don’t forget that you will also need to set up DNS to point to your device with its internal hostname.

The custom-script (CS) firmware version is important because we need to run a script on the device at boot time.

Then, generate your TLS certificates; how you do that is not covered here. I personally use easyrsa to manage my internal certificates. I also deploy my root CA certificate to all my devices – via AD group policy on Windows, or via Ansible for my Linux hosts.

You should have a single file, called server3.crt, containing both the certificate and the private key, looking something like this:

-----BEGIN CERTIFICATE-----
MIIG1TCCB....
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
TlI0kiQCiPGN...
-----END RSA PRIVATE KEY-----
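Before uploading, a quick check that the combined file really contains both blocks can save a debugging round-trip (lighttpd on the device expects the certificate and key in one PEM). This helper is my own illustration, not part of the official process:

```python
# Sanity-check a combined PEM file: list the BEGIN-labels of its blocks,
# so you can confirm both a certificate and a private key are present.

def pem_blocks(pem_text: str) -> list:
    """Return the labels of all '-----BEGIN ...-----' blocks in the text."""
    return [line.split("BEGIN ")[1].rstrip("-")
            for line in pem_text.splitlines()
            if line.startswith("-----BEGIN ")]

if __name__ == "__main__":
    with open("server3.crt") as f:
        print(pem_blocks(f.read()))  # expect a CERTIFICATE and a PRIVATE KEY
```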

Then upload the certificate to your device:

scp server3.crt ubnt@<device>:/etc/persistent/https/server3.pem

Log into the Powerbeam via SSH and create an rc file at ‘/etc/persistent/rc.poststart’ with the following content:

#!/bin/sh
# Replace the web server's certificate, then kill lighttpd so it restarts with it
cp /etc/persistent/https/server3.pem /etc/server.pem
kill $(ps | grep '[l]ighttpd' | awk '{ print $1 }')

Then make it executable, save the configuration and reboot:

chmod +x /etc/persistent/rc.poststart
save
reboot

And done! 

References: https://community.ui.com/questions/AirOS-8-custom-SSL-certificates-Guide-Resolved/fcf2d671-1933-4fe1-bdcb-ba33a94020e4 


Vodafone NBN (FTTP) IPv6 Prefix Delegation on a Ubiquiti EdgeRouter Lite

Background

I’ve always been intimidated by IPv6: the addresses were long and confusing, and not fully understanding the technology made me nervous about integrating it into my systems. How did it work? Would enabling it expose everything on my LAN to the internet? Would I find myself under attack without realising it?

Well, the good news is that smarter people have already thought about the problems above (and more) and engineered an addressing system with built-in security and automatic configuration (SLAAC). So long as you run a firewall on your router, IPv6 gives you the benefit of externally reachable, routable addresses on your LAN – but only where you allow it.

I found a lot of misinformation and confusion around IPv6 online. If you’re looking for general information on IPv6, review the videos below; I also found the instructions here invaluable: https://medium.com/@nurblieh/ipv6-on-the-edgerouter-lite-c95e3cc8d49d

In the end, I had to guess at the correct settings to obtain IPv6 addresses on Vodafone NBN. I called Vodafone’s support number – email, weirdly, wasn’t an option – but their support wasn’t great: all I could establish was that they do offer IPv6, not how to configure it. Thankfully their IPv6 implementation is standard to the point of being boring, and after a few guesses I got it right.

Configuration

To get IPv6 prefix delegation working on the edgerouter with Vodafone FTTP NBN a few steps need to be taken:

  1. Set up the firewall rulesets WAN6_IN and WAN6_LOCAL on the EdgeRouter to allow IPv6 traffic, and assign them to the internet interface.
  2. Enable dhcpv6-pd on the internet Ethernet port and request a /56 from Vodafone.
  3. Delegate /64 subnets to each interface on your network. If I understand it correctly, a /56 gives you 256 /64 networks to assign.
  4. Optionally, as I do, disable DNS name servers being advertised to clients (I prefer my systems to use my internal DNS).
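The subnet arithmetic behind steps 2 and 3 is easy to verify with Python’s standard library – a delegated /56 splits into 256 /64s, one per prefix-id from 0x00 to 0xFF. (The prefix below is the IPv6 documentation range, not my real delegation.)

```python
# Verify the /56 -> /64 arithmetic with the stdlib ipaddress module.
import ipaddress

delegated = ipaddress.ip_network("2001:db8:1234:ab00::/56")  # example prefix
lans = list(delegated.subnets(new_prefix=64))

print(len(lans))   # 256 usable /64 networks
print(lans[0x02])  # the /64 that a prefix-id of :2 would select
```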

Generally I use the config tree to do configuration, but to save time I’ve included below the relevant settings to enable IPv6, taken from the config file. If I’ve missed something, please let me know in the comments.

The below settings contain the firewall configuration for an IPv6 connection. I’ve removed IP addresses but left the rules in, to show how to poke holes in the firewall and allow services through directly to servers. As has been mentioned elsewhere on the internet, allowing ICMPv6 through is critical for IPv6 to function correctly.

firewall {
     ...
     ipv6-name WAN6_IN {
         default-action drop
         rule 10 {
             action accept
             description "allow established"
             protocol all
             state {
                 established enable
                 related enable
             }
         }
         rule 20 {
             action drop
             description "drop invalid packets"
             protocol all
             state {
                 invalid enable
             }
         }
         rule 30 {
             action accept
             description "allow ICMPv6"
             protocol icmpv6
         }
         rule 40 {
             action accept
             description "allow traffic for www"
             destination {
                 address xxx
             }
             protocol tcp
         }
         rule 41 {
             action accept
             description "Allow SSH (v6) to Atlas"
             destination {
                 address xxx
                 port 22
             }
             protocol tcp
         }
         rule 42 {
             action accept
             description "Allow Ipv6 to Plex"
             destination {
                 address xxx
                 port 32400
             }
             protocol tcp
         }
     }
     ipv6-name WAN6_LOCAL {
         default-action drop
         rule 10 {
             action accept
             description "allow established"
             protocol all
             state {
                 established enable
                 related enable
             }
         }
         rule 20 {
             action drop
             description "drop invalid packets"
             protocol all
             state {
                 invalid enable
             }
         }
         rule 30 {
             action accept
             description "allow ICMPv6"
             protocol icmpv6
         }
         rule 40 {
             action accept
             description "allow DHCPv6 client/server"
             destination {
                 port 546
             }
             protocol udp
             source {
                 port 547
             }
         }
     }
}

The below section contains the details on how to configure an interface for dhcpv6-pd. You may notice that I advertise IPv6 on multiple interfaces. It’s important to understand the function of the prefix-id and host-address fields: the prefix-id is a two-digit hexadecimal number from 00 to FF (0–255) that selects which /64 subnet to assign to the interface, and the host address is the address the router will assign itself on that subnet. I’ve disabled DNS advertisement on my interfaces, as I would prefer my systems to use my internal DNS for all requests; my internal DNS servers are configured to resolve both A and AAAA records.

interfaces {
     ethernet eth0 {
         address dhcp
         description "Internet (IPoE)"
         dhcpv6-pd {
             pd 0 {
                 interface eth1 {
                     host-address ::1
                     no-dns
                     prefix-id :1
                     service slaac
                 }
                 interface eth1.3 {
                     host-address ::1
                     no-dns
                     prefix-id :2
                     service slaac
                 }
                 interface eth1.4 {
                     host-address ::1
                     no-dns
                     prefix-id :3
                     service slaac
                 }
                 interface eth2 {
                     host-address ::1
                     no-dns
                     prefix-id :4
                     service slaac
                 }
                 prefix-length /56
             }
             rapid-commit enable
         }
         duplex auto
         firewall {
             in {
                 ipv6-name WAN6_IN
                 name WAN_IN
             }
             local {
                 ipv6-name WAN6_LOCAL
                 name WAN_LOCAL
             }
         }
         ip {
         }
         ipv6 {
             address {
                 autoconf
             }
             dup-addr-detect-transmits 1
         }
         mtu 1500
         speed auto
     }
}

Further Notes

I found that implementing IPv6 wasn’t perfect. As I kept playing with the settings, my networked hosts would acquire another IPv6 address via SLAAC without removing the old one, which was merely marked as stale. The easy fix was to remove the addresses manually, but it was tedious. I also had to modify my web server’s configs to respond properly to IPv6 requests, by adding [::]:443 to the virtual host directive, e.g. <VirtualHost blog.warbel.net:443 [::]:443>
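Cleaning the stale addresses up by hand got tedious; a small helper like the one below (an illustration, not the exact tooling I used) can collect the addresses that `ip -6 addr` flags as deprecated, ready to feed to `ip -6 addr del`:

```python
# List deprecated (stale) IPv6 addresses on an interface by parsing
# the output of `ip -6 addr show`. The output format is Linux iproute2's.
import re
import subprocess

def deprecated_addrs(ip_output: str) -> list:
    """Pull the address/prefix out of every 'inet6 ... deprecated' line."""
    return [m.group(1)
            for m in re.finditer(r"inet6\s+(\S+)\s.*deprecated", ip_output)]

def stale_on(device: str) -> list:
    """Run `ip -6 addr show dev <device>` and return its deprecated addresses."""
    out = subprocess.run(["ip", "-6", "addr", "show", "dev", device],
                         capture_output=True, text=True).stdout
    return deprecated_addrs(out)
```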

Tools

https://www.vultr.com/resources/subnet-calculator-ipv6/ – The Vultr IPv6 subnet calculator is very handy. As is this calculator: http://www.gestioip.net/cgi-bin/subnet_calculator.cgi

IPv6 up and running

Automating Let’s Encrypt Wildcard Certificate Renewal with Mail-in-a-Box

I use Let’s Encrypt, a fantastic free service, to secure my websites. However, like many people, I have multiple sub-domains for my blog, Gitea, and whatever else I fancy spinning up. These are all hosted on a separate box, not my MIAB box. So when Let’s Encrypt announced wildcard certificates I jumped on board.
The only problem I had with wildcard certificates was the extra steps required to automate the whole process: Let’s Encrypt (or certbot, the program that does the work) requires TXT records to be updated in publicly available DNS.
As I also use Mail-in-a-Box for my public DNS and email, I had to write some scripts and crontab entries to automate the renewal process. Thankfully Mail-in-a-Box has an API for doing exactly this.

Notes and Caveats:

  • I’ve seen other blog entries that do the same thing (minus the automation) for a single certificate. However, when I generated my certs some time ago, I followed these instructions. As such, my cleanup.sh script did not work as expected, so run it separately, not via the --manual-cleanup-hook call.
  • Read the documentation if you have problems: https://certbot.eff.org/docs/using.html#pre-and-post-validation-hooks
  • If you figure out how to make the cleanup script work as a hook – please let me know in the comments 😉
  • I assume you already have certbot/letsencrypt installed.
  • You will need to substitute your credentials in the scripts – I have a specific account I use for just this, so my personal email/admin account isn’t compromised with the password stored in plain text.
  • Make sure your scripts have the executable bit set (chmod +x).
  • My crontab entries will email root, which is aliased out, to my personal email address.
  • Cron jobs (see comments in scripts) will run at midnight and 5 minutes past midnight respectively, every 5 days.

The Scripts:

These are super basic:
/root/renewal.sh

#!/bin/bash
#add to crontab like this:
#0 0 */5 * * /root/renewal.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
certbot certonly -n --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --manual --manual-auth-hook /root/authenticator.sh --preferred-challenges dns -d "dom.ain,*.dom.ain"

/root/authenticator.sh

#!/bin/bash
# Push the validation token certbot exposes in $CERTBOT_VALIDATION to MIAB DNS
curl -s -X PUT -d "$CERTBOT_VALIDATION" --user user@dom.ain:<Password> https://<your_MIAB_Server>/admin/dns/custom/_acme-challenge.dom.ain/txt

/root/cleanup.sh

#!/bin/bash
#Add to crontab:
#5 0 */5 * * /root/cleanup.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
# get the txt record
TOREMOVE=$(curl -s -X GET --user user@dom.ain:<password> https://<your_MIAB_Server>/admin/dns/custom/_acme-challenge.dom.ain/txt | grep "value" | awk '{print $2}' | sed 's/"//g')
echo "removing $TOREMOVE"
curl -s -X DELETE -d "$TOREMOVE" --user user@dom.ain:<password> https://<your_MIAB_Server>/admin/dns/custom/_acme-challenge.dom.ain/txt
service apache2 restart

Finally, if you want to check the certificate run:

certbot certificates
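If you’d rather check expiry from a script than eyeball certbot’s output, the stdlib ssl module can convert a certificate’s notAfter string into an epoch time. This helper is a hypothetical add-on of mine, not part of the renewal scripts:

```python
# Compute days remaining from a certificate's notAfter string, which uses
# the "%b %d %H:%M:%S %Y GMT" format that the ssl module understands.
import ssl
import time

def days_left(not_after, now=None):
    """Days until expiry, e.g. days_left('Jun 11 12:00:00 2025 GMT')."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (expiry - (time.time() if now is None else now)) / 86400
```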

Improved Speedtest-CLI

I’ve recently changed ISPs and wanted to see how my speeds compared to those reported by my ISP. However, the speedtest utility on Linux only tests one server at a time. While it is possible to specify a server, and to get a list of local servers to test, I would rather automate the process. And I did!

I’ve made a speedtest wrapper in Python 3 and added it to my private gitea repository: https://gitea.warbel.net/wargus/improved-speedtest

By default the program will search for all servers matching a specified string, then test each one of them one after the other.
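The wrapper’s core idea can be sketched as follows. The listing format here is an assumption based on speedtest-cli’s current `--list` output (`1234) Sponsor (City, Country) [12.34 km]`); the repository linked above is the authoritative version:

```python
# Sketch: filter `speedtest-cli --list` output by a search string, then run
# a test against each matching server id in turn.
import re
import subprocess

def matching_servers(listing: str, needle: str) -> list:
    """Return server ids whose listing line contains needle (case-insensitive)."""
    ids = []
    for line in listing.splitlines():
        m = re.match(r"\s*(\d+)\)\s*(.+)", line)
        if m and needle.lower() in m.group(2).lower():
            ids.append(m.group(1))
    return ids

def run_all(needle: str):
    """Test every server whose description matches needle, one after another."""
    listing = subprocess.run(["speedtest-cli", "--list"],
                             capture_output=True, text=True).stdout
    for sid in matching_servers(listing, needle):
        subprocess.run(["speedtest-cli", "--server", sid])
```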

Self-hosting Git with Gitea, LXC and Apache

This is a quick tutorial on how to set up Gitea on Ubuntu 18.04.2 using Linux Containers (LXC). As I’ve already set up LXC, this assumes you have a working configuration. I’ve also assumed you have Apache 2 running with the proxy modules enabled.

Why Use Gitea?

Gitea is an open-source replacement for GitHub, with many of the same features. While I personally use and enjoy GitHub, I’ve always wanted the freedom to keep my code on my own server. If GitHub ever changes, I’ll always have my own repositories, protected and managed in a way that I choose.

Getting Started:

In my example, I’ll be creating a new Ubuntu LXC container from the image ‘ubuntu’ called gitea and then opening a shell in the new container.

$ lxc launch ubuntu gitea; lxc exec gitea -- bash

From the shell, update all the packages, install bash-completion, create a new non-super user account for yourself and for the new service, gitea:

# apt update; apt full-upgrade -y; apt install bash-completion ssh
# adduser <your-username>; adduser --system git; addgroup git
# exit

Log into your new container via SSH with your unprivileged account. First, check the IP address with lxc list:

lxc list
ssh <username>@ip_address

Create the directories you need for gitea and change the permissions:

$ sudo mkdir -p /etc/gitea /var/lib/gitea
$ sudo chown git:git /etc/gitea /var/lib/gitea

Download Gitea – check for newer releases, as 1.7.0 was the current version at the time of writing – and make the binary executable:

$ cd /usr/local/bin
$ sudo wget -O gitea https://dl.gitea.io/gitea/1.7.0/gitea-1.7.0-linux-amd64
$ sudo chmod +x gitea

At this point you should be good to follow the instructions from Gitea to set up the service: https://docs.gitea.io/en-us/linux-service/

The last step is to start the service and complete the basic configuration by logging into the web interface at the container’s ip_address:3000.
I’ve chosen to set up Gitea with a SQLite3 database for simplicity. In larger organisations, you might consider using MariaDB/MySQL and setting up LDAP integration.

Setting up Apache

To simplify and easily identify my sub-domain configuration, I have separate config files for each website, and I recommend doing the same. Create your config file in /etc/apache2/sites-available. As I have a wildcard certificate from Let’s Encrypt, I can simply reuse large chunks of configuration from other virtual hosts. In my example, my Gitea website is called gitea.warbel.net.

Apache is running on another LXC container, which I’ve logged into.

$ sudo touch /etc/apache2/sites-available/006-gitea.warbel.net.conf

Open the new file and copy in the config. At this point consider looking at the documentation from gitea: https://docs.gitea.io/en-us/reverse-proxies/
However, my configuration is here. Note that it actually includes two virtual hosts: the first, on port 80, redirects the user to port 443; the second, on 443, reverse proxies the entire subdomain. Pay close attention to the ProxyPass and ProxyPassReverse directives; a misplaced ‘/’ can really screw things up!
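Since the linked config may not be to hand, here is a minimal sketch of that two-vhost arrangement. The hostname is mine, but the container IP and certificate paths are illustrative placeholders, not my exact config:

```apache
# Port 80: redirect everything to HTTPS
<VirtualHost *:80>
    ServerName gitea.warbel.net
    Redirect permanent / https://gitea.warbel.net/
</VirtualHost>

# Port 443: terminate TLS and reverse proxy the whole subdomain to Gitea
<VirtualHost *:443>
    ServerName gitea.warbel.net
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/warbel.net/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/warbel.net/privkey.pem
    ProxyPreserveHost On
    # Trailing slashes must match on both directives:
    ProxyPass / http://10.0.0.50:3000/
    ProxyPassReverse / http://10.0.0.50:3000/
</VirtualHost>
```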

Setting Up Gitea

At this point you should be able to connect to your gitea website on its subdomain. As the documentation on the gitea website makes clear, be sure to make gitea aware that it is behind a proxy by setting the directive in the application config:
[server]
ROOT_URL = http://git.example.com/git/

Other Considerations

I set my Gitea up with SQLite3 as the database driver, for simplicity’s sake. Given I already have MariaDB established, I could have configured it with a database server, but given the traffic considerations (i.e. I’ll likely be the only contributor) it seemed like overkill.

Additionally, I have reconfigured my LXC container to set itself up with a static IP address and added its IP address to my Ansible update scripts for maintenance/documentation purposes. I’ve also set Gitea up with an email address on my mail server to ensure that it can email me notifications.

Troubleshooting

Most of the problems I faced were quite early on, and mostly centred on directories and permissions. If you run the Gitea service manually, check the logs and the output of systemctl for any errors.

Connecting to the MET Office Weather Observation Website

A short post on contributing to the MET Weather Observation Website.
(What to do with all this data I have)

A colleague pointed out that the Bureau of Meteorology is contributing to a massive online project hosted by the MET Office that collects weather observation data from the community (link here), and suggested that I should contribute my data.

My only problem was that my data is collected from a bespoke weather station and lacked a mechanism that could push the data to the MET Office’s servers. So using their documentation, I wrote one in Python and set it up as a cron job on my server.

Here is my weather station website on the MET site. And a link to my script, in Python that does the work.
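The core of such a push script is just URL construction: WOW ingests an observation as a single HTTP request with the reading encoded in the query string. The endpoint and parameter names below follow the WOW automatic-upload documentation as I recall it – treat them as assumptions and check the current docs (the linked script is the real implementation):

```python
# Sketch: build a WOW automatic-reading upload URL for one observation.
# Endpoint and parameter names (siteid, siteAuthenticationKey, dateutc,
# tempf, ...) are assumptions based on the WOW docs; site id/key are fake.
from urllib.parse import urlencode

WOW = "http://wow.metoffice.gov.uk/automaticreading"

def wow_url(site_id, site_key, date_utc, **readings):
    """Build the upload URL for one observation (WOW expects imperial units)."""
    params = {"siteid": site_id,
              "siteAuthenticationKey": site_key,
              "dateutc": date_utc,
              "softwaretype": "warbel-wx 1.0"}
    params.update(readings)  # e.g. tempf=68.5, humidity=54
    return f"{WOW}?{urlencode(params)}"
```

A cron job would call this with the latest reading from the weather station and fetch the resulting URL.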