Learning PUML for Project Design and Delivery

I stumbled across the “C4 Model for visualising software architecture” via a blog post on Hacker News back before Christmas, and had been waiting for an opportunity to try the model out for myself. I’ve always hated making system design diagrams, probably because I’ve always found it:

  • time consuming;
  • impossible to meaningfully track changes;
  • difficult, because I lacked appropriate tooling and a methodological framework; and
  • boring.

I liked what I saw in the above links because it addressed the first three problems: I could cut down how long it took to make diagrams by writing PUML code, track changes in a git repository, and configure VSCode with plugins to generate the images and relationships programmatically for use in documentation/wikis. And because it meant I could write my diagrams in code, it fixed the fourth problem too: I now actually enjoy making diagrams.

I now use the VSCode PlantUML plugin, which on Linux also required installing graphviz:

sudo apt install graphviz -y

After some basic configuration to get the plugin working, creating diagrams becomes quite easy:

@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

Person(personAlias, "Label", "Optional Description")
Container(containerAlias, "Label", "Technology", "Optional Description")
System(systemAlias, "Label", "Optional Description")

Rel(personAlias, containerAlias, "Label", "Optional Technology")
@enduml 

Screenshot of VSCode with the rendering plugin

Obviously I cannot include any of the diagrams I’ve made for work here on my blog, so I’ve used code and examples from the Git repository for the C4 Model instead. The repository includes everything you need to generate visually pleasing models. After a few days of working from the examples in the repo, I was able to make quite useful diagrams illustrating the new file transfer system we’re designing and building at work. The C4 Model repository also makes use of another repository of images/sprites, which I’ve used to add some flair and comprehensibility to the diagrams.

Python Scripts for Interacting with LDAP

This is more or less just a post about my public github repository: https://github.com/wargus85/PythonLDAPScripts

There is more information in the README file in the repository. To summarise, I’ve written some Python scripts that reach out to Active Directory and look up group membership. Because I couldn’t find anything similar when I started writing them, I thought it would be a good idea to publish them online.
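For a rough sense of the kind of query the scripts wrap (this is an illustration, not code from the repo; the server, bind account, and group DN are all placeholders), the same lookup can be done from a shell with ldapsearch:

# list the members of an AD group via LDAP (all names are placeholders)
ldapsearch -LLL -H ldap://dc01.example.com \
  -D 'svc-ldap@example.com' -W \
  -b 'dc=example,dc=com' \
  '(&(objectClass=user)(memberOf=CN=MyGroup,OU=Groups,DC=example,DC=com))' \
  sAMAccountName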

Configuring Ubiquiti Powerbeam with custom TLS Certificates

Background

I recently re-connected to the WA Freenet (discord here), an open WiFi wide-area network that spans the Perth metropolitan area. Perth has an ideal geography for a WiFi network: it is extraordinarily flat, with an escarpment running along its eastern spine that has excellent line-of-sight to the suburbs.

I had joined the WAFN many years ago and it is thanks to the other operators on the network that, at the time, I was able to learn how to configure IPv4 subnets, firewalls, and BGP routing, skills that have served me well in my professional career.

Moving forward to the present day, I was pleasantly surprised to find that I was able to connect to an existing AP with links to the backbone of the network from my address. So I purchased two new radios: a Powerbeam 5AC 500 for my link in Ardross, and a Powerbeam 5AC Gen2 for my roof at home. Without much hassle, and with the support of the WAFN community, I was able to reconfigure my Ubiquiti EdgeRouter with BGP, advertise my routes, and accept those advertised to me.

Problem:

I personally hate seeing the ‘this website is insecure’ messages that appear when a site uses self-signed certificates. At home, I secure all my internal websites, devices and appliances with certificates from my internal CA, so I wanted to do the same for the new radios. However, I was unable to find a website that outlined the entire process end-to-end, so I thought I should write one myself.

Process:

Firstly, download the custom-script firmware for your device and install it. From the examples below, it is easy to deduce the URL for the appropriate firmware on the UBNT download site.

Firmware examples (version 8.7.11, hardware from the UBNT website):

  • Powerbeam 5AC 500
    Non-CS: https://dl.ui.com/firmwares/XC-fw/v8.7.11/XC.v8.7.11.46972.220614.0419.bin
    CS: https://dl.ui.com/firmwares/XC-fw/v8.7.11/XC.v8.7.11-cs.46972.220614.0419.bin
  • Powerbeam 5AC Gen 2
    Non-CS: https://dl.ui.com/firmwares/XC-fw/v8.7.11/WA.v8.7.11.46972.220614.0420.bin
    CS: https://dl.ui.com/firmwares/XC-fw/v8.7.11/WA.v8.7.11-cs.46972.220614.0420.bin

Don’t forget that you will also need to set up DNS to point to your device by its internal hostname.

The custom-script (CS) firmware version is important because we need to run a script on the device at boot time.

Then, generate your TLS certificates; how you do that is not covered in detail here. I personally use easyrsa to manage my internal certificates. I also deploy my root CA certificate to all my devices via AD group policy on Windows, or via Ansible for my Linux hosts.
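For reference, a minimal easyrsa 3 workflow looks something like the below (a sketch, with names as placeholders; adapt to your own CA layout), ending with the combined cert-and-key file used in the next step:

# one-time CA setup
./easyrsa init-pki
./easyrsa build-ca
# key + signing request for the radio, then sign it as a server cert
./easyrsa gen-req server3 nopass
./easyrsa sign-req server server3
# concatenate certificate and key into the single file the device will serve
cat pki/issued/server3.crt pki/private/server3.key > server3.pem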

You should end up with a combined certificate and private key that looks something like the below, called server3.pem:

-----BEGIN CERTIFICATE-----
MIIG1TCCB....
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
TlI0kiQCiPGN...
-----END RSA PRIVATE KEY-----

Then upload the certificate to your device:

scp server3.pem ubnt@<device>:/etc/persistent/https/server3.pem

Log into the Powerbeam via SSH and create an rc file at /etc/persistent/rc.poststart with the following content:

#!/bin/sh
# copy the uploaded cert+key over the one the web UI serves
cp /etc/persistent/https/server3.pem /etc/server.pem
# kill lighttpd so it comes back up serving the new certificate
kill $(ps | grep [l]ighttpd | awk '{ print $1 }')

Then make it executable, save the configuration, and reboot:

chmod +x /etc/persistent/rc.poststart
save
reboot

And done! 
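If you want to confirm the radio is serving the new certificate, a quick check from another machine looks like this (the hostname is a placeholder):

echo | openssl s_client -connect powerbeam.internal.lan:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates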

References: https://community.ui.com/questions/AirOS-8-custom-SSL-certificates-Guide-Resolved/fcf2d671-1933-4fe1-bdcb-ba33a94020e4 


Vodafone NBN (FTTP) IPv6 Prefix Delegation on a Ubiquiti EdgeRouter Lite

Background

I’ve always been intimidated by IPv6, the addresses were long and confusing, and not fully understanding the technology made me nervous to integrate it into my systems. How did it work? Would enabling it expose everything in my LAN to the internet? Would I find myself under attack without realising it?

Well, the good news is that smarter people have already thought about these problems (and more) and engineered an addressing system with automatic configuration (SLAAC) built in. As long as you run a firewall on your router, IPv6 gives you the benefit of externally routable addresses on your LAN, reachable from the outside only if you allow it.

I found a lot of misinformation and confusion around IPv6 online. If you’re looking for general information on IPv6, review the videos below; I also found the instructions here invaluable: https://medium.com/@nurblieh/ipv6-on-the-edgerouter-lite-c95e3cc8d49d

In the end, I had to guess at the correct settings to use on Vodafone NBN to obtain IPv6 addresses. I called Vodafone’s support number (email, weirdly, wasn’t an option); unfortunately their support wasn’t great, and all I could establish was that they do offer IPv6, not any technical detail. Thankfully their IPv6 implementation is standard to the point of being boring, and after a little trial and error I got it right.

Configuration

To get IPv6 prefix delegation working on the EdgeRouter with Vodafone FTTP NBN, a few steps need to be taken (the equivalent CLI commands are sketched below the list):

  1. Set up the firewall rulesets WAN6_IN and WAN6_LOCAL on the EdgeRouter to allow IPv6 traffic, and assign them to the internet interface.
  2. Enable dhcpv6-pd on the internet Ethernet port, requesting a /56 from Vodafone.
  3. Delegate /64 subnets to each interface on your network. If I understand it correctly, a /56 gives you 256 available /64 networks to assign (prefix IDs 00 to FF).
  4. Optionally, stop DNS name servers being advertised to clients; I do, because I want my hosts to use my internal DNS.
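As a sketch of the CLI equivalents for steps 2–4 (interface names and prefix IDs follow my config further down; the WAN6_IN/WAN6_LOCAL rulesets themselves need to exist first, created with similar set firewall ipv6-name commands or via the config tree):

configure
set interfaces ethernet eth0 dhcpv6-pd pd 0 prefix-length /56
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 host-address ::1
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 prefix-id :1
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 service slaac
set interfaces ethernet eth0 dhcpv6-pd pd 0 interface eth1 no-dns
set interfaces ethernet eth0 dhcpv6-pd rapid-commit enable
set interfaces ethernet eth0 firewall in ipv6-name WAN6_IN
set interfaces ethernet eth0 firewall local ipv6-name WAN6_LOCAL
commit
save
exit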

Generally I use the config tree to do configuration; however, to save time, I’ve included below the relevant settings from the config file to enable IPv6. If I’ve missed something, please let me know in the comments.

The section below contains the firewall settings for an IPv6 connection. I’ve removed the IP addresses but left the rules in place to show how to poke holes in the firewall and allow services through directly to servers. As has been mentioned elsewhere on the internet, allowing ICMPv6 through is critical for IPv6 to function correctly.

firewall {
     ...
     ipv6-name WAN6_IN {
         default-action drop
         rule 10 {
             action accept
             description "allow established"
             protocol all
             state {
                 established enable
                 related enable
             }
         }
         rule 20 {
             action drop
             description "drop invalid packets"
             protocol all
             state {
                 invalid enable
             }
         }
         rule 30 {
             action accept
             description "allow ICMPv6"
             protocol icmpv6
         }
         rule 40 {
             action accept
             description "allow traffic for www"
             destination {
                 address xxx
             }
             protocol tcp
         }
         rule 41 {
             action accept
             description "Allow SSH (v6) to Atlas"
             destination {
                 address xxx
                 port 22
             }
             protocol tcp
         }
         rule 42 {
             action accept
             description "Allow Ipv6 to Plex"
             destination {
                 address xxx
                 port 32400
             }
             protocol tcp
         }
     }
     ipv6-name WAN6_LOCAL {
         default-action drop
         rule 10 {
             action accept
             description "allow established"
             protocol all
             state {
                 established enable
                 related enable
             }
         }
         rule 20 {
             action drop
             description "drop invalid packets"
             protocol all
             state {
                 invalid enable
             }
         }
         rule 30 {
             action accept
             description "allow ICMPv6"
             protocol icmpv6
         }
         rule 40 {
             action accept
             description "allow DHCPv6 client/server"
             destination {
                 port 546
             }
             protocol udp
             source {
                 port 547
             }
         }
     }
}

The section below shows how to configure an interface for dhcpv6-pd. You may notice that I advertise IPv6 on multiple interfaces. It’s important to understand the function of the prefix-id and host-address fields: the prefix-id is a two-digit hexadecimal number from 00 to FF (0–255) indicating which /64 subnet to assign to the interface, and the host-address is the address the router assigns itself on that subnet. I’ve disabled DNS advertisement on my interfaces, as I would prefer my systems to use my internal DNS for all requests; my internal DNS servers are configured to resolve both A and AAAA records.

interfaces {
     ethernet eth0 {
         address dhcp
         description "Internet (IPoE)"
         dhcpv6-pd {
             pd 0 {
                 interface eth1 {
                     host-address ::1
                     no-dns
                     prefix-id :1
                     service slaac
                 }
                 interface eth1.3 {
                     host-address ::1
                     no-dns
                     prefix-id :2
                     service slaac
                 }
                 interface eth1.4 {
                     host-address ::1
                     no-dns
                     prefix-id :3
                     service slaac
                 }
                 interface eth2 {
                     host-address ::1
                     no-dns
                     prefix-id :4
                     service slaac
                 }
                 prefix-length /56
             }
             rapid-commit enable
         }
         duplex auto
         firewall {
             in {
                 ipv6-name WAN6_IN
                 name WAN_IN
             }
             local {
                 ipv6-name WAN6_LOCAL
                 name WAN_LOCAL
             }
         }
         ip {
         }
         ipv6 {
             address {
                 autoconf
             }
             dup-addr-detect-transmits 1
         }
         mtu 1500
          speed auto
     }
}

Further Notes

I found that implementing IPv6 wasn’t perfect. As I kept playing with the settings, my networked hosts would acquire another IPv6 address via SLAAC without removing the old one, instead just marking it as stale. The easy fix was to manually remove the stale addresses, but it was tedious. I also had to modify my web server’s configs to properly respond to IPv6 requests, by adding [::]:443 to the virtual host directive, e.g. <VirtualHost blog.warbel.net:443 [::]:443>.

Tools

https://www.vultr.com/resources/subnet-calculator-ipv6/ – The Vultr IPv6 subnet calculator is very handy. As is this calculator: http://www.gestioip.net/cgi-bin/subnet_calculator.cgi

IPv6 up and running

Automating Letsencrypt Wildcard Certificate renewal with Mail in a Box.

I use Let’s Encrypt, a fantastic free service, to secure my websites. However, like many people I have multiple sub-domains for my blogs, gitea and whatever else I fancy spinning up, all hosted on a separate box, not my MIAB box. So when Let’s Encrypt announced their wildcard certificates I jumped on board.
The only problem I had with wildcard certificates was the extra steps required to automate the whole process: Let’s Encrypt (or certbot, the program that does the work) requires TXT records to be updated in publicly available DNS.
As I also use Mail-in-a-Box for my public DNS and email, I had to write some scripts and crontab entries to automate the renewal process. Thankfully Mail-in-a-Box has an API for doing exactly this.

Notes and Caveats:

  • I’ve seen other blog entries that do the same thing (minus the automation) for a single certificate. However, when I generated my certs some time ago I followed these instructions, and as a result my cleanup.sh script did not work as expected as a --manual-cleanup-hook; run it separately rather than via that flag.
  • Read the documentation if you have problems: https://certbot.eff.org/docs/using.html#pre-and-post-validation-hooks
  • If you figure out how to make the cleanup script work as a hook – please let me know in the comments 😉
  • I assume you already have certbot/letsencrypt installed.
  • You will need to substitute your credentials in the scripts – I use a dedicated account for just this, so my personal email/admin account isn’t compromised by having its password stored in plain text (see the quick API test below the list).
  • Make sure your scripts have the executable bit set (chmod +x).
  • My crontab entries email root, which is aliased to my personal email address.
  • The cron jobs (see comments in the scripts) run at midnight and five minutes past midnight respectively, every five days.
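Before relying on the cron jobs, it’s worth checking your credentials and the API path by hand. Something like this (the account and MIAB hostname are placeholders; curl will prompt for the password) should return your existing custom DNS records as JSON:

curl -s -X GET --user user@dom.ain https://box.dom.ain/admin/dns/custom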

The Scripts:

These are super basic:
/root/renewal.sh

#!/bin/bash
#add to crontab like this:
#0 0 */5 * * /root/renewal.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
certbot certonly -n --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --manual --manual-auth-hook /root/authenticator.sh --preferred-challenges dns -d "dom.ain" -d "*.dom.ain"

/root/authenticator.sh

#!/bin/bash
# certbot exports $CERTBOT_VALIDATION to the --manual-auth-hook;
# push it to Mail-in-a-Box as the _acme-challenge TXT record
curl -s -X PUT -d "$CERTBOT_VALIDATION" --user user@dom.ain:<Password> https://<your.miab.url>/admin/dns/custom/_acme-challenge.dom.ain/txt
# brief pause so the new record is being served before certbot validates
# (an assumption on my part; MIAB usually updates its zone quickly)
sleep 30

/root/cleanup.sh

#!/bin/bash
#Add to crontab:
#5 0 */5 * * /root/cleanup.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
# get the value of the challenge TXT record
TOREMOVE=$(curl -s -X GET --user user@dom.ain:<password> https://<your_MIAB_Server>/admin/dns/custom/_acme-challenge.dom.ain/txt | grep "value" | awk '{print $2}' | sed 's/"//g')
echo "removing $TOREMOVE"
# delete the record, then restart apache so it serves the renewed certificate
curl -s -X DELETE -d "$TOREMOVE" --user user@dom.ain:<password> https://<your_MIAB_Server>/admin/dns/custom/_acme-challenge.dom.ain/txt
service apache2 restart

Finally, if you want to check the certificate run:

certbot certificates

Improved Speedtest-CLI

I’ve recently changed ISPs and wanted to see how my speeds compared to those advertised by my ISP. However, the speedtest utility on Linux only tests one server at a time; while it is possible to specify a server, and to get a list of local servers to test, I would rather automate the process. And I did!

I’ve made a speedtest wrapper in Python 3 and added it to my private gitea repository: https://gitea.warbel.net/wargus/improved-speedtest

By default the program will search for all servers matching a specified string, then test each one of them one after the other.
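The wrapper itself is in Python; purely as an illustration of the idea (the search string is a placeholder), the same loop can be sketched in shell around speedtest-cli:

# list known servers, keep those matching a string, then test each in turn
speedtest-cli --list | grep -i 'perth' | while read -r line; do
  id=${line%%)*}   # the server id precedes the ')'
  echo "== server $id =="
  speedtest-cli --server "$id" --simple
done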

Self-hosting Git with Gitea, Lxc and Apache

This is a quick tutorial on how to set up Gitea on Ubuntu 18.04.2 using Linux Containers (LXC). As I’ve already set up LXC, this assumes you have a working configuration. I’ve also assumed you have Apache 2 working with the proxy modules enabled.

Why Use Gitea?

Gitea is an open-source replacement for GitHub, with many of the same features. While I personally use and enjoy GitHub, I’ve always wanted the freedom to keep my code on my own server. If GitHub ever changes, I’ll always have my own repositories, protected and managed in a way that I choose.

Getting Started:

In my example, I’ll be creating a new Ubuntu LXC container from the image ‘ubuntu’ called gitea and then opening a shell in the new container.

$ lxc launch ubuntu gitea; lxc exec gitea -- bash

From the shell, update all the packages, install bash-completion and ssh, and create a new non-super-user account for yourself plus a system account for the new service, gitea:

# apt update; apt full-upgrade -y; apt install bash-completion ssh
# adduser <your-username>; adduser --system --group git
# exit

Log into your new container via ssh with your unprivileged account. First check the IP address with lxc list:

lxc list
ssh <username>@ip_address

Create the directories you need for gitea and change the permissions:

$ sudo mkdir -p /etc/gitea /var/lib/gitea
$ sudo chown git:git /etc/gitea /var/lib/gitea

Download gitea and make it executable; check for the latest version first, as 1.7.0 was current at the time of writing:

$ cd /usr/local/bin
$ sudo wget -O gitea https://dl.gitea.io/gitea/1.7.0/gitea-1.7.0-linux-amd64
$ sudo chmod +x gitea

At this point you should be good to follow the instructions from Gitea to set up the service: https://docs.gitea.io/en-us/linux-service/
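Condensed from those docs, the unit file boils down to something like the below; double-check it against the instructions for your Gitea version before using it:

sudo tee /etc/systemd/system/gitea.service >/dev/null <<'EOF'
[Unit]
Description=Gitea (Git with a cup of tea)
After=network.target

[Service]
User=git
Group=git
WorkingDirectory=/var/lib/gitea
ExecStart=/usr/local/bin/gitea web -c /etc/gitea/app.ini
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now gitea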

The last step is to start the service and complete the basic configuration; you’ll need to log into the web installer on port 3000 of the container (http://<ip-address>:3000).
I’ve chosen to set gitea up with a sqlite3 database for simplicity. In larger organisations, you might consider using MariaDB/MySQL and setting up LDAP integration.

Setting up Apache

To keep my sub-domain configuration simple and easy to identify, I have separate config files for each website, and I recommend doing the same. Create your config file in /etc/apache2/sites-available. As I have a wildcard certificate from Let’s Encrypt, I can simply reuse large chunks of configuration from other virtual hosts. In my example, my gitea website is called gitea.warbel.net.

Apache is running on another LXC container, which I’ve logged into.

$ sudo touch /etc/apache2/sites-available/006-gitea.warbel.net.conf

Open the new file and copy in the config; at this point, consider looking at the documentation from gitea: https://docs.gitea.io/en-us/reverse-proxies/
My configuration actually includes two virtual hosts: the first, on port 80, redirects the user to port 443, and the second does the work. As it reverse-proxies the entire subdomain, pay close attention to the ProxyPass and ProxyPassReverse directives; a misplaced ‘/’ can really screw things up! A sketch follows below.
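This is a minimal sketch of the pair rather than my exact config, assuming a wildcard certificate under /etc/letsencrypt and the gitea container reachable at 10.0.0.10:3000 (both placeholders):

<VirtualHost *:80>
    ServerName gitea.warbel.net
    # send everything to HTTPS
    Redirect permanent / https://gitea.warbel.net/
</VirtualHost>

<VirtualHost *:443>
    ServerName gitea.warbel.net
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/warbel.net/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/warbel.net/privkey.pem
    # the trailing slashes matter on both directives
    ProxyPreserveHost On
    ProxyPass / http://10.0.0.10:3000/
    ProxyPassReverse / http://10.0.0.10:3000/
</VirtualHost>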

Setting Up Gitea

At this point you should be able to connect to your gitea website on its subdomain. As the documentation on the gitea website makes clear, be sure to make gitea aware that it is behind a proxy by setting the directive in the application config:
[server]
ROOT_URL = http://git.example.com/git/

Other Considerations

I set my gitea up with sqlite3 as the database driver for simplicity’s sake. Given I already have MariaDB established I could have used it instead, but given the traffic considerations (i.e. I’ll likely be the only contributor) it seemed like overkill.

Additionally, I have reconfigured my LXC container to use a static IP address and added that address to my ansible update scripts for maintenance/documentation purposes. I’ve also set gitea up with an email account on my mail server so that it can send me notifications.

Troubleshooting

Most of the problems I faced were early on, and mostly centred on directories and permissions. If you run the gitea service manually, check the logs and the output of systemctl for errors.

Connecting to the MET Office Weather Observation Website

A short post on contributing to the MET Weather Observation Website.
(What to do with all this data I have)

A colleague pointed out that the Bureau of Meteorology contributes to a massive online project hosted by the MET Office that collects weather observations from the community (link here), and suggested I contribute my data.

My only problem was that my data is collected by a bespoke weather station, which lacked a mechanism to push the data to the MET Office’s servers. So, using their documentation, I wrote one in Python and set it up as a cron job on my server.
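At its core the push is just an HTTP request to the WOW “automatic reading” endpoint; a hand-rolled sketch might look like the below, with the site ID, key and readings as placeholders (WOW expects imperial units and UTC timestamps, per their documentation, so check that against the current API):

curl -G 'http://wow.metoffice.gov.uk/automaticreading' \
  --data-urlencode 'siteid=YOUR-SITE-ID' \
  --data-urlencode 'siteAuthenticationKey=YOUR-PIN' \
  --data-urlencode "dateutc=$(date -u '+%Y-%m-%d %H:%M:%S')" \
  --data-urlencode 'tempf=68.5' \
  --data-urlencode 'humidity=55' \
  --data-urlencode 'baromin=29.92' \
  --data-urlencode 'softwaretype=diy-arduino-station'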

Here is my weather station website on the MET site. And here is a link to my script, in Python, that does the work.

Building an Arduino Weather Station with ELK Stack

Having recently been introduced to the Elasticsearch, Logstash, Kibana (ELK) stack by a colleague, and having wanted to build a home-brew weather station for some time, I decided to combine the two projects. There was surprisingly little information available online about building an Arduino weather station; although it has been done, everyone seems to have their own take and hardware requirements. Two sites provided useful information for this project:

  1. The manufacturer’s wiki of the weather station components and;
  2. This blog entry, written by a staff member at Elastic.co.

As a side note, once I figured out how to use the ELK stack and set up my data types, I also integrated information from a Fronius inverter. The inverter has a well-documented API that outputs its data in JSON format, and I used logstash to periodically pull the data from it and push it into Elasticsearch. My Arduino is set up in a similar way to the inverter: it hosts a simple website that displays only the current weather data in JSON format.

Things you will need:

  • An ELK stack. Elastic.co provides extensive documentation on setting up the Elastic stack, so for the sake of simplicity I won’t cover that here.
  • A weather station with anemometer from dfrobot or a supplier; I bought my equipment from Little Bird Electronics.
  • An Arduino with an Ethernet shield, or a Freetronics EtherTen.
  • A passive PoE adaptor to supply power to your Arduino.
  • The latest version of the Arduino IDE, with the Time library added to your Arduino environment. I prefer to do my coding in Microsoft Visual Studio Code, which has plugin support for Arduino.
  • Cat6 cable, telephone cabling (with RJ11 headers), some M->F breadboard cabling, a waterproof project box, and an appropriate mast that can support the weather station if needed.
  • Access to my source code on Github.

Building and Programming the Hardware:

In my case, I opted to build a new, more stable mast, entirely separate from my TV antenna mast. Bunnings was able to provide the mast equipment I needed. As you may notice, I have also mounted a Ubiquiti Rocket M5 with a 120 degree antenna (nothing to do with this project!).

New Mast

Unfortunately the serial cables provided in the weather station kit were not long enough to do anything useful, so I had to buy two RJ11 joiners, RJ11 headers and cabling to extend the runs to the eaves, where I mounted the electronics.

Roofing cabling

Project box mounted under the eves

It ended up being quite a task running the cabling to the eaves, but it was worth it in the end! As I wanted to mount the sensors outside, I opted for a power-over-ethernet (PoE) setup, which required a Cat6 cable and passive PoE injectors. The weather station sensors on the roof are connected to the weather station circuit board via the two RJ11 cables.

The Arduino and sensors were mounted DIY-fashion inside a project box. I used a Dremel to cut a hole in the plastic lid to feed the cabling through, so that the box would sit flush with the eaves.

Programming:

Arduino:

To collect and output the data I recycled some code from the Arduino website and some from other places (credited in the code). My contribution was building in some error checking: the sensor unit, as it turned out, was not always accurate and would quite often return garbage data. To fix this, I programmed the Arduino to read the data twice and compare the results. Both readings had to pass a sanity check (here in Perth, WA it never gets below -5°C or above 55°C, nor does the barometric pressure wildly fluctuate), and the two values had to be reasonably close to each other; only then is the data reported. Strictly speaking this will not stop all bad data, but it seems to do a good enough job 99.99% of the time. See above for the link to my github project.

ELK Stack:

I’ve provided on github the pipeline configuration you will need for logstash to connect to your Arduino and pull the JSON data. If you don’t have an openweathermap account, it is possible to remove the appropriate section from the 01-http-weather.conf file. You will need to copy the files into your /etc/logstash/conf.d directory and register the pipelines with logstash by modifying the pipelines.yml file.
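For reference, registering the pipeline in /etc/logstash/pipelines.yml looks roughly like this (the id and filename follow my repo’s naming; adjust to suit yours), after which logstash needs a restart:

- pipeline.id: weather
  path.config: "/etc/logstash/conf.d/01-http-weather.conf"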

The last step is to use the code snippets in the mappings_script file. From within the Dev Tools section of Kibana, run the PUT command below. This tells Elasticsearch about the data coming in and maps the data types correctly:

PUT /weather/_mappings/doc
{
  "properties": {
    "coord": {"type": "geo_point"},
    "coordlocal": {"type": "geo_point"},
    "dt": {"type": "date"},
    "localdt": {"type": "date"},
    "sys.sunrise": {"type": "date"},
    "sys.sunset": {"type": "date"},
    "clouds.all": {"type": "float"},
    "main.humidity": {"type": "float"},
    "main.temp": {"type": "float"},
    "main.pressure": {"type": "float"},
    "rain.3hr": {"type": "float"},
    "visibility": {"type": "float"},
    "weather.humidity": {"type": "float"},
    "weather.temp": {"type": "float"},
    "weather.pressure": {"type": "float"},
    "wind.deg": {"type": "float"},
    "wind.speed": {"type": "float"},
    "wind.localspeed": {"type": "float"},
    "wind.localdeg": {"type": "float"},
    "wind.localgust": {"type": "float"},
    "localrain.1h": {"type": "float"},
    "localrain.24h": {"type": "float"}
  }
}

And that is it!

At this point, you should have a working weather station, with logstash pulling the data and pushing it to Elasticsearch. I haven’t detailed how to build the graphs or dashboards to display the data, but there is plenty of documentation available online.

Sensor Inaccuracy/Future Improvements:

After mounting the sensors, I discovered that the heat generated by the electronics interferes with the readings. The top line represents the temperature read from my weather station; the bottom line is data from OpenWeatherMap.

The solution, when I have time, will be to drill more holes in the project box to allow better airflow, and to partition the Arduino away from the sensors to better isolate the heat it generates.


Using Xnest or Putty/VcXsrv to Start a Full Remote Session

As useful as SSH and command-line (CLI) tools are when using a Linux/Ubuntu system remotely, sometimes they simply aren’t enough. And I dare say, sometimes it’s better to use a graphical tool than the CLI alternative (parted/gparted: I’m looking at you!).

Sometimes you may even want to use a full graphical desktop remotely (audible gasps!). This can be handy if you’re developing a desktop system on a remote server or virtual machine and want to experience it the same way as the end user. I’ve found it useful when developing Linux code from a Windows desktop and wanting to run a full development environment remotely. Setting up a remote session, I’ve found, also resolves some quirky display issues when using a proprietary GUI program remotely.

In this tutorial I’ll show you how to tunnel X11 through SSH, both from a Windows computer with Putty and VcXsrv, and from a Linux computer with Xnest. I will show you how to tunnel an entire desktop session or an individual program.

Before we begin, configure the remote server:

I’m assuming that port 22 is open on your remote server and you are capable of logging into the server with ssh.

Edit the file /etc/ssh/sshd_config (with sudo if necessary)

sudo vi /etc/ssh/sshd_config

Ensure that X11Forwarding yes is set, and set AddressFamily inet:

X11Forwarding yes
...
AddressFamily inet

WARNING: Setting AddressFamily to inet will force ssh to only use ipv4 addresses. If you use IPv6, then this tutorial is not for you!

Next, install lxde – a lightweight desktop environment:

sudo apt install lxde lxsession

Generally I prefer Linux Mint with Cinnamon, but LXDE provides a nice environment that does not use a lot of network bandwidth. There are alternatives though, and I suggest looking into them if you’re interested.

Finally, restart ssh, log out and log back into your server to make sure everything is working.

sudo service ssh restart;exit

Let’s start with the easier option first: Linux.

Running a single program remotely:

At this point your remote server is already configured to allow SSH X11 forwarding. If you log into your remote server with ssh -X and run a graphical program (assuming it is installed), the program will magically appear in front of you. Be aware that it will take over your terminal unless you push it to the background with an ampersand (&), for instance:

user@RemoteServer$ gparted &

If you experience trouble at this point, it is worth connecting with ssh using the -vv -X flags to see the error messages.

Running an entire session remotely: 

On your Linux/Ubuntu/Mint Desktop, open a terminal and install xnest:

sudo apt install xnest

Xnest is both an X11 server and an X11 client. There is a lot written about it, so again, Google is your friend if you want more information. Once installed, create a new script called remote_session.sh, make it executable, and edit it:

touch remote_session.sh; chmod +x remote_session.sh; vi remote_session.sh

Add the following, modified to suit your personal circumstances:

#!/bin/bash

# start a nested X server on display :3 (adjust the geometry to taste)
Xnest :3 -ac -geometry 1500x990 &
# point programs launched from this terminal at the nested server
export DISPLAY=:3
# connect with X11 forwarding and start the LXDE session manager
ssh -X HOSTNAME_OF_REMOTE_SERVER lxsession

The bash script is very simple and comes in three parts.
The first line calls Xnest, starting a new X11 server, fully windowed, with the dimensions 1500x990; you will need to adjust this to your preferred resolution. The export line tells graphical programs run from your current terminal to use Xnest rather than your original X11 server. Finally, ssh -X connects to your server and runs lxsession, the LXDE session manager. If all is successful, a new window should appear containing a full desktop session.

Run the script by typing at the terminal:

./remote_session.sh

Lxde Session running in Xnest

Your session may not have the nice background picture, which I set to Greenish_by_EstebanMitnick.jpg in /usr/share/backgrounds.

Once finished working in the remote session, simply close your programs like normal, and close the Xnest window.

Connecting From Windows:

The process is very similar to the above. I’ve assumed you’ve installed VcXsrv and are somewhat familiar with Putty.

Running a Single Program:

To run a single program so that it integrates into your Windows environment, launch the VcXsrv program by double-clicking its desktop icon; its icon will appear in your notification area. Open Putty, and ensure that Enable X11 forwarding is ticked in the Putty options before connecting to your box:

Tick the box!

Once logged in, run your graphical program via Putty, and it should appear magically in front of you! (This is my preferred way to access virt-manager from a Windows computer.)

Running an Entire Session:

The process to connect with Putty is the same, except that instead of running a single graphical program (say, gparted), you start a session manager such as lxsession.

Additionally, you will need to use XLaunch, which will ask you to configure your X11 server. Most of the options XLaunch presents can be clicked through without modification; just ensure you select ‘One Large Window’ when prompted.

Once you’ve run VcXsrv as outlined above, log in with Putty and run your session manager (lxsession) and boom! You should have an entire remote session running on your Windows computer.

Remote Linux on Windows!