Migrating from VirtualBox to KVM

As written previously: there are performance benefits to be had by switching from VirtualBox to KVM. And now, after making the switch, I can firmly say that not only are the performance benefits noticeable, but configuring automatic startup and, prima facie, backups seems much easier too.

I’ve grown very fond of Oracle’s VirtualBox, but given that it’s
more of a prosumer product than an enterprise one, it’s only fair that I learn how to use its bigger brother, KVM.

The switch itself was very simple, all things considered. The process I followed, after troubleshooting the various stages, worked like this:

  1. Stop and back up all the VirtualBox VMs.
  2. Convert the virtual disks from VirtualBox to KVM format.
  3. Create the virtual machines using virt-manager.
  4. Remove vboxtool and its configs.
  5. Create a bridge interface on the KVM host.
  6. Set each virtual machine to use the new bridge interface to reach the internet and local network.
  7. Update each VM’s internal network config for the renamed Ethernet device.

In Detail:

Stopping the machines was easy: simply ssh into them and run shutdown -h now.

Back up the machines using the clone option in VirtualBox.

On the hypervisor, navigate to the virtual machine directory (usually /home/user/VirtualBox VMs/) and create a new disk image from the .vdi files like this:

qemu-img convert -f vdi -O qcow2 VIRTUALBOX.vdi KVM.qcow2

Thanks to this website for the useful tip. At this point I moved each of my VM disks to a new, separate directory. This wasn’t strictly necessary; it’s just neater!
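With several VMs, the conversion is worth scripting. Below is a minimal sketch, assuming bash and qemu-img; the directory layout and the DRY_RUN toggle are my own additions, not part of the original process:

```shell
# Convert every .vdi in SRC to a .qcow2 in DEST.
# Set DRY_RUN=1 to print the qemu-img commands instead of running them.
vdi_to_qcow2() {
    local src="$1" dest="$2" vdi base
    mkdir -p "$dest"
    for vdi in "$src"/*.vdi; do
        [ -e "$vdi" ] || continue              # no .vdi files present
        base="$(basename "$vdi" .vdi)"
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "qemu-img convert -f vdi -O qcow2 $vdi $dest/$base.qcow2"
        else
            qemu-img convert -f vdi -O qcow2 "$vdi" "$dest/$base.qcow2"
        fi
    done
}

# e.g. vdi_to_qcow2 "$HOME/VirtualBox VMs" "$HOME/kvm-disks"
```

Run it once with DRY_RUN=1 first to check the generated commands before letting it loose on real disks.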

Use virt-manager to create the virtual machines; the process is intuitive. Be sure to enable starting at boot. It was at this point that I ran into trouble: by default the virtual machines cannot talk to the host, which is a problem if the host is also a file server. To get around this I had to modify the network config on the host. The KVM networking page provided information on how to achieve this. Ultimately, you create a network bridge and then set each of the VMs to use that bridge. Below is the modified interfaces file on my Ubuntu 16.04 VM host:

# The primary network interface
#bridge to allow the VMs and the host to communicate
auto br0
iface br0 inet static
address 10.60.204.130
netmask 255.255.255.128
broadcast 10.60.204.255
gateway 10.60.204.129
dns-nameservers 10.60.204.133 8.8.8.8
dns-search warbelnet.local
bridge_ports enp6s0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
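One way to sanity-check the address/netmask/broadcast trio above: the broadcast is (address AND netmask) OR (inverted netmask). A quick bash helper, which is my own sketch and not part of the setup:

```shell
# Derive the broadcast address from an IPv4 address and netmask,
# e.g. 10.60.204.130 / 255.255.255.128 -> 10.60.204.255.
broadcast_of() {
    local IFS=. i
    local -a ip=($1) mask=($2) bc
    for i in 0 1 2 3; do
        bc[i]=$(( (ip[i] & mask[i]) | (255 ^ mask[i]) ))
    done
    echo "${bc[0]}.${bc[1]}.${bc[2]}.${bc[3]}"
}

broadcast_of 10.60.204.130 255.255.255.128   # 10.60.204.255
```

For the config above it confirms that 10.60.204.130/25 does indeed broadcast on 10.60.204.255.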

Below is the configuration in virt-manager for the network in one of my VMs:

(Screenshot: virt-manager network settings with the VM attached to bridge br0.)

This was tested and working.

As my virtual machines all run Ubuntu 16.04, each guest’s network interfaces file needed to be updated, since the interface name changes after a (virtual) hardware change.

Finally, I uninstalled VirtualBox, removed vboxtool (which I had been using to start the VirtualBox VMs automatically), removed vboxtool’s config from /etc/, and restarted everything to test.

Very happy to say it’s been quite a success!

Adding a New Domain and Securing it with SSL

This week my wife asked me to create a blog for her. As such, I’ve had to rejig the www server to make space for her new domain.

The process is quite simple:

  1. Create the new user on the www server so she has sftp access.
  2. Create a mailbox and MySQL database for the new user.
  3. Create the directory structure, copy the latest WordPress into it and set file permissions.
  4. Create the DNS entries at my DNS provider, and locally on my home DNS.
  5. Copy and edit my blog’s apache config files.
  6. Enable the new site, without SSL.
  7. Update the Let’s Encrypt certificate files with the new domain.
  8. Enable the SSL website.
  9. Configure WordPress.

In detail:

Create a new user on the web server with adduser -D

Depending on your setup, create a new mailbox if you like, and create a new database and user. I use phpmyadmin and postfixadmin for these tasks. Remember to note down the passwords and make them secure! Use a random password generator if needs must.

I created the directory structure in /var/www/bel.warbel.net/. Moving forward, I think it would be more secure to store users’ websites in their home directories and jail the users to stop access to the wider system. It would also make sense to have the subdomain match the username for simplicity’s sake. Be sure to change ownership once you’ve copied in the latest WordPress: chown USER:www-data /var/www/bel.warbel.net -R

WordPress (as www-data) will need write permissions on the subdirectories, particularly the data directories, to allow for downloading plugins and themes. Be sure to chmod g+w those directories.
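The docroot preparation can be sketched as a small function; the wp-content layout is standard WordPress, but the helper itself is my own and leaves the chown (which needs root) as a comment:

```shell
# Create a WordPress docroot skeleton and open group write on wp-content,
# so www-data can install plugins/themes and accept uploads.
prep_wp_dirs() {
    local docroot="$1"
    mkdir -p "$docroot"/wp-content/{plugins,themes,uploads}
    chmod -R g+w "$docroot/wp-content"
    # then, as root: chown -R USER:www-data "$docroot"
}

# e.g. prep_wp_dirs /var/www/bel.warbel.net
```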

At this point, if you have not done so already, create the DNS entries for your site. For me this meant updating my internal DNS records with a CNAME for bel.warbel.net pointing to www.warbel.net, which I replicated on my external DNS host, https://www.noip.com/, whom I recommend. As I do not have a static IP address, I use their dyndns service on my router.

Next, I copied /etc/apache2/sites-available/blog.warbel.net.conf and blog.warbel.net-le-ssl.conf, renaming them to bel.warbel.net.conf and bel.warbel.net-le-ssl.conf respectively. The Let’s Encrypt client will, initially, not expect to see an SSL site, so I commented out the redirects in the non-SSL file and updated all the references to the hostname and root directories in the config.

Enable the new site: a2ensite bel.warbel.net; service apache2 reload

Run the ssl certificate generator with all the domains you need:

sudo letsencrypt certonly --webroot -w /var/www/html -d www.warbel.net -w /var/www/bel.warbel.net -d bel.warbel.net -w /var/www/blog.warbel.net -d blog.warbel.net

If successful, it will show a screen prompting you to agree to update the certificate with the new domain:

(Screenshot: the Let’s Encrypt prompt confirming the certificate expansion to the new domain.)

At this point, it is safe and appropriate to enable the ssl site with: a2ensite bel.warbel.net-le-ssl.conf; service apache2 reload

Be sure to edit the non-SSL site’s config and re-enable forced SSL.

Finally, configure the new WordPress site. I found that to enable uploading files (updates etc.) I needed to add a line to wp-config.php:

define('FS_METHOD', 'direct');
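If you script your WordPress installs, the define can be dropped in with sed. A sketch, assuming GNU sed and the stock wp-config.php “stop editing” marker; the helper name is mine:

```shell
# Add the FS_METHOD define above WordPress's "stop editing" marker,
# skipping the edit if it is already present.
add_fs_method() {
    local conf="$1"
    grep -q "FS_METHOD" "$conf" && return 0
    sed -i "/stop editing/i define('FS_METHOD', 'direct');" "$conf"
}

# e.g. add_fs_method /var/www/bel.warbel.net/wp-config.php
```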

Migrating from VirtualBox to KVM

After doing some much-needed research into virtualisation on Linux, it’s become apparent that I should migrate my virtual machines from VirtualBox to KVM. KVM has significant performance benefits and is a solid ‘production’ system. It’s also clear that if I want to advance my technical skills in the enterprise Linux space, I need to learn more about KVM and implement it on my systems.

I love VirtualBox because it is cross-platform: I can create a VM on a Linux host and move it to a Windows host if needed. The remote desktop server built into the program is a very handy feature too. However, I will admit that I very rarely (if ever) spin up a VM on Linux and move it to another OS, and since discovering MobaXterm on Windows I can now easily access the virt-manager X session of a running VM on KVM from any Windows machine (read: my laptops). As an aside, MobaXterm is an amazing program and complements PuTTY quite nicely!

My concerns so far about the migration are threefold:

  1. I need to convert the disk images into a native format for KVM and virt-manager to use.
  2. I currently automate my VM startup and shutdown with VBoxTool so I will need to either find a preexisting automation solution, or create my own init scripts.
  3. Virtualised hardware: clearly VirtualBox and KVM virtualise hardware in their own ways, so I need to be sure the machines can migrate to the new environment and still work. I’m mostly concerned with networking: experience has taught me that Linux is very forgiving of hardware changes, but with the new naming conventions for Ethernet devices, my network configs will need to be updated.
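Concern 3 largely reduces to a one-line fix per guest: after the move, /etc/network/interfaces still names the old device (e.g. eth0) while the new NIC shows up under a new name. A hedged sketch; the device names here are examples only:

```shell
# Replace the old interface name with the new one throughout an
# interfaces file. Run `ip link` in the guest to find the new name.
rename_iface() {
    local file="$1" old="$2" new="$3"
    sed -i "s/\b$old\b/$new/g" "$file"
}

# e.g. rename_iface /etc/network/interfaces eth0 ens3
```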

Using a Raspberry Pi as a cheap security system

A small project this weekend. I used my hitherto untouched Raspberry Pi 2 as a security system. The process is reasonably straightforward for anyone who is already familiar with the Raspberry Pi.

I have two webcams attached to the Pi via an external powered USB hub. This is necessary as the device cannot supply enough power to run itself and the cameras. It also has a USB 2.4G wireless dongle.

I’ve installed MotionEye onto the Pi’s SD card. Again, simply using:
sudo dd if=MotionEyesIMGFile of=/dev/sdX
did the job.

Once the device was set up on the wired network, it could be secured with an admin password (by default it has none) and added to the wireless network. All of the settings can be accessed by clicking the menu icon in the top left-hand corner, and the process is intuitive, as is adding the cameras.

The only real difficulty encountered was making it work behind my reverse proxy. Doing so relied on editing the /etc/motioneye.conf file to include the line:
base_path /security

I had tried to ssh into the device to make the changes; however, the file system is set to read-only by default, so I ended up removing the microSD card and editing the files on my desktop.
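With the card mounted on the desktop, the edit can be made idempotently. A sketch (base_path and motioneye.conf are from MotionEye; the helper and mount path are mine):

```shell
# Set (or update) base_path in motioneye.conf so MotionEye generates
# URLs that work behind the /security reverse proxy.
set_base_path() {
    local conf="$1" path="$2"
    if grep -q '^base_path ' "$conf"; then
        sed -i "s|^base_path .*|base_path $path|" "$conf"
    else
        echo "base_path $path" >> "$conf"
    fi
}

# e.g. set_base_path /mnt/sdcard/etc/motioneye.conf /security
```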

That then needed to be mirrored in my apache reverse-proxy config files:
ProxyPass /security http://10.60.204.xxx
ProxyPassReverse /security http://10.60.204.xxx

And done! The new security system is accessible via ssl at: https://www.warbel.net/security/

Enabling Secure SSL to Roundcube, Postfixadmin, Sonarr and Deluge with an Apache Reverse Proxy

As discussed in earlier posts, I have hit the limits of what you might call ‘standard’ hosting. As I have multiple machines, both physical and virtual, and only a single externally accessible IP address, I needed to figure out how to allow access to certain URLs and applications on these machines without relying on NAT and port-forwarding rules at the router (layer 3 routing). I also needed to do this securely, using SSL.

This blog entry outlines the process I followed to modify a significant number of applications to allow access through a single apache host that handles all incoming requests and secures them with SSL to the client.

As my mail server already had SSL configured, SSL needed to be disabled so that it goes back to accepting traffic only on port 80. Since all incoming traffic will come from the reverse proxy and a firewall will block all other requests, this remains secure.

Begin by creating a backup of the VM of the mail server. Then on the mail server (once running again):

Disable the SSL site:
sudo a2dissite default-ssl.conf

Disable SSL forwarding in the .htaccess file in /var/www/html/ such that the file will now look like:

#RewriteEngine On

# Redirect all HTTP traffic to HTTPS.
#RewriteCond %{HTTPS} !=on
#RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

# Send / to /roundcube.
#RewriteRule ^/?$ /roundcube [L]

Edit the roundcube config /var/www/html/roundcube/config/config.inc.php to stop it from forcing SSL. Look for the line below and change it to false:
$config['force_https'] = false;

Remove https redirect from the default config in apache and remove the servername directive:
sudo vi /etc/apache2/sites-enabled/000-default.conf

#RewriteEngine on
#RewriteCond %{SERVER_NAME} =mail.warbel.net
#RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
#ServerName mail.warbel.net

This is important, as we don’t want incoming requests to be forwarded again.

To fix forwarding/proxying issues with Owncloud there is ample documentation available on their site. The short of it is to edit /var/www/owncloud/config/config.php to include the following lines (modify to suit):

#information included to fix reverse proxying
#https://doc.owncloud.org/server/8.2/admin_manual/configuration_server/reverse_proxy_configuration.html
'trusted_proxies' => ['0.0.0.0'],
'overwritehost' => 'www.warbel.net',
'overwriteprotocol' => 'https',
'overwritewebroot' => '/owncloud',
'overwritecondaddr' => '^0\.0\.0\.0$',

Then enable https rewriting in the /var/www/owncloud/.htaccess file. Owncloud is smart enough to know when it’s being accessed via a proxy. Change:
RewriteCond %{HTTPS} off
to RewriteCond %{HTTPS} on

Just to be safe, unload ssl and restart apache2.
sudo a2dismod ssl; sudo service apache2 restart

Test that SSL is disabled. I discovered that I needed to clear my cache/history, as Chrome would attempt to redirect to https as per my browser history.

I have another server that handles Deluge and Sonarr. I won’t go into depth here, but if you want those applications accessible via a reverse proxy, stop the programs and edit their configs. In Sonarr:

Edit the config for Sonarr, /home/<user>/.config/Nzbdrone/config.xml
Edit the line <UrlBase></UrlBase> to
<UrlBase>/sonarr</UrlBase>

In Deluge, edit the conf file /home/<user>/.config/deluge/web.conf, changing the line:
"base": "/" to
"base": "/deluge"
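The two edits above can be scripted against the stopped services. A sketch assuming GNU sed and the default config contents; the function name is mine:

```shell
# Point Sonarr and Deluge at their proxied sub-paths. The config paths
# are parameters so you can adapt them to your own home directories.
set_url_bases() {
    local sonarr_conf="$1" deluge_conf="$2"
    sed -i 's|<UrlBase></UrlBase>|<UrlBase>/sonarr</UrlBase>|' "$sonarr_conf"
    sed -i 's|"base": "/"|"base": "/deluge"|' "$deluge_conf"
}

# e.g. set_url_bases ~/.config/Nzbdrone/config.xml ~/.config/deluge/web.conf
```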

Start your services again and check they’re functional.

Now the fun part. (This should work, assuming you have letsencrypt already enabled.) Enable the proxy modules in apache2 (proxy_http is the module that actually proxies to http:// backends):
sudo a2enmod proxy proxy_http proxy_html proxy_connect; sudo service apache2 restart

Edit your main site’s SSL config in /etc/apache2/sites-enabled/sitename-ssl.conf to include the following (again, edit to suit):

ProxyRequests Off
ProxyPreserveHost On

<Proxy *>
Order deny,allow
Allow from all
</Proxy>

ProxyPass /sonarr http://ipaddress:8989/sonarr
ProxyPassReverse /sonarr http://ipaddress:8989/sonarr

#Location directive included to stop unauthorised access to sonarr
<Location /sonarr>
AuthType Basic
AuthName "Sonarr System"
AuthBasicProvider file
AuthUserFile “/etc/apache2/htpasswd”
Require valid-user
</Location>
ProxyPass /deluge http://ipaddress:8112/
ProxyPassReverse /deluge http://ipaddress:8112/

ProxyPass /roundcube/ http://ipaddress/roundcube/
ProxyPassReverse /roundcube/ http://ipaddress/roundcube/
Redirect permanent /roundcube /roundcube/
ProxyPass /postfixadmin http://ipaddress/postfixadmin
ProxyPassReverse /postfixadmin http://ipaddress/postfixadmin

ProxyPass /owncloud/ http://ipaddress/owncloud/
ProxyPassReverse /owncloud/ http://ipaddress/owncloud/
Redirect permanent /owncloud /owncloud/index.php

A few notes on the directives:

  • As Sonarr does not have authentication enabled, I’ve secured it with basic auth. If necessary, generate a strong random password for the apache htpasswd file here: http://passwordsgenerator.net/ and use this tool to create your htpasswd file contents: http://www.htaccesstools.com/htpasswd-generator/
  • The permanent redirect is useful if people try to access /roundcube, which won’t work, rather than /roundcube/, which will.
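Rather than a web-based generator, the htpasswd entry can be produced locally with openssl’s apr1 scheme (the MD5 variant Apache’s own htpasswd tool uses). The username and salt below are illustrative:

```shell
# Print an Apache basic-auth line for /etc/apache2/htpasswd.
# apr1 is Apache's MD5 password scheme; the salt is at most 8 chars.
htpasswd_line() {
    local user="$1" pass="$2" salt="$3"
    printf '%s:%s\n' "$user" "$(openssl passwd -apr1 -salt "$salt" "$pass")"
}

# e.g. htpasswd_line sonarr 'S3cret!pw' Ab3dEf9h >> /etc/apache2/htpasswd
```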

At this point you should be able to restart everything and it will work (at least it did for me!). Please leave any questions or comments below.

Edit: Depending on your DNS requirements, you may want to stop people inside the network from accessing the mail server on port 80. If so, the following line in your firewall should do the trick, give or take:
$IPTABLES -A INPUT -p tcp --dport 80 ! -s ip.of.www.proxy -j DROP

Summarising the Process

The goal of this project, to some extent, has been to create an environment that functions similarly to a Microsoft Exchange + OWA environment using only open-source programs in a Linux environment. The solution had to be robust, scalable and similar to what a small-to-medium business would use. For the most part I’ve achieved that, though I’ll admit the lack of a backup solution is a concern at this point. But we’ll put that aside for the time being (due to financial constraints).

I would expect that most end users would be able to adopt the software quickly and integrate it into their workflows. My project would cover most of the basic functions of an office environment, though it clearly does not aim to replace or integrate with other software such as HP TRIM, Docushare, Objective etc. Furthermore, as far as I’m aware, my environment would not support integrating those programs with the mail server. I digress.

To recap: I’ve been able to create a postfix/dovecot IMAP mail server, secured with SSL. The web interface allows passwords on the MySQL backend to be reset by the user. Shared calendar functions are supported by ownCloud, which also supports file sharing and limited online editing of documents. Again, all secured by true SSL thanks to Let’s Encrypt.

The backend also has web-hosting capabilities and useful user-management and administration functions. PhpMyAdmin (locked down to the local subnet), PostfixAdmin and the aforementioned ownCloud are all installed and offer reasonable admin functions similar to an Exchange/AD environment. Any L1 technician would be able to use the backends without too much hassle.

So the final stage will be to set up two more things.

  1. Mailman or similar: PostfixAdmin handles mail aliases well (I would say they’re easier to create than in Office 365 with ADFS integration); however, distribution groups are not supported.
  2. Setting up apache on the web server to handle all incoming requests to the mail server by using the proxy modules. The picture below should demonstrate:

(Diagram: the www server acting as proxy between clients and the mail server.)

Fundamentally, the web server will encrypt all the incoming traffic or push clients to SSL connections. It will map the following from the mail server:

  • http://mail.warbel.net/owncloud to https://www.warbel.net/owncloud
  • http://mail.warbel.net/roundcube to https://www.warbel.net/roundcube
  • http://mail.warbel.net/postfixadmin to https://www.warbel.net/postfixadmin

The traffic between the mail server and the www server will technically be unencrypted. Given that they’re both VMs running on the same host, though, this presents a limited security risk. The mail server will also be configured to firewall off all incoming connections on ports 80 and 443 that do not come from the web server.

Next Step: Reverse Proxy with Apache2

So my next challenge, which has so far been a difficult one, is to set up apache2 as a reverse proxy. The technical challenge is that my mail server sits behind a firewall on a private network. Technically, so does my web server. All web traffic (read: http/https, ports 80/443) is currently forwarded to my web server. It hosts two websites, blog.warbel.net and www.warbel.net, both with SSL enabled.

My mail server also runs apache and is secured in a similar fashion – all requests on port 80 are forwarded to port 443. It has a valid SSL certificate for mail.warbel.net.

To demonstrate the challenge, I have unashamedly borrowed this graphic from Atlassian:

(Diagram: a reverse proxy fronting several internal servers, borrowed from Atlassian.)

In their example they have three internal servers, with the reverse proxy in the middle accessing the services on the private network on behalf of the client. In my scenario the reverse proxy is also a web server in its own right, and only needs to forward SSL requests to the mail server. Only two URLs on the mail server matter: https://mail.warbel.net/roundcube and https://mail.warbel.net/postfixadmin/. I would prefer to keep the hostname mail.warbel.net intact; however, as a last resort, proxying the two URLs would work just as well.

Looking ahead, I can see that proxying just the subdirectories will result in SSL errors: apache on mail is configured with only mail.warbel.net as the registered domain name. However, I’m yet to figure out how to use apache on the web server to simply forward SSL requests to mail, rather than trying to negotiate them itself.

SSL Success – How to enable SSL Certificates with Let’s Encrypt

Having created my blog on my VM host and my mail server on a VM, I decided to move my hosting services to a VM as well. The process was largely smooth and involved setting up and securing phpMyAdmin on the new VM first, then setting up zabbix, and finally transferring all the configs and databases to the new system. The last stage was simply disabling and removing apache on the host.

Having successfully completed the migration and re-configured port forwarding on my router, I now have a web server without SSL. Enabling good SSL is now quite easy (and free). Only a few steps are necessary, as the process is largely automatic. On Ubuntu, install letsencrypt from the repo:

sudo apt-get install python-letsencrypt-apache

Then generate the certs:

sudo letsencrypt certonly --webroot -w /var/www/html -d mail.warbel.net
(In the above example I’m working on the mail server; the same process applied to my web server.)

Then enable let’s encrypt on apache:

sudo letsencrypt run --apache --redirect
(This forces apache to use SSL everywhere.)

It’s worth explaining that letsencrypt will generate new conf files from your currently active ones (it looks at /etc/apache2/sites-enabled), making new SSL-enabled conf files. It will then redirect all http traffic to https.

Add the below line to root’s crontab:

0 5,17 * * * letsencrypt renew >/dev/null 2>&1

This will run letsencrypt every day at 5am and 5pm to check whether the certificate needs renewing. To edit crontab as root, use the command:

sudo crontab -e

At this point, restart apache (sudo service apache2 restart) and the site will work with SSL only.

A final note/step on my mail server: I had attempted to set up postfix and dovecot with the new SSL certificates. Dovecot was easy enough to configure: I edited the /etc/dovecot/conf.d/10-ssl.conf file, setting the following options:

ssl_cert = </etc/letsencrypt/live/mail.warbel.net/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.warbel.net/privkey.pem

Note that the latest SSL certificate and keys will be placed in the ‘live’ folder.

I was, as yet, unable to configure Postfix with the new SSL certificates using a similar method; it is still using the snake-oil certificates. This, however, is only an issue when setting up a mail account for the first time on a PC or device, and the workaround is easy enough: force the client to accept the certificates. More importantly, when accessing the webmail, the client sees a green/happy padlock indicating that the site is secure, rather than a dire warning of a security breach.

Next Project – Enabling SSL

So my current project has been to setup a mail server and I think I’ve been largely successful in that goal.

My next step (for those interested) will be to enable true SSL with signed certificates. The other issue I have revolves around the fact that my website’s subdomains mail and blog are hosted on a virtual machine, mail.warbel.net, whereas www.warbel.net is hosted on the VM host itself (atlas). The trick will be to enable the web server on atlas to operate as a reverse proxy that automatically accepts all incoming port 443 and 80 connections and forwards the traffic to the appropriate sub-domain.

There are free signed certificate sites available, namely https://letsencrypt.org/ which I will use to achieve these ends.

Enabling User-Initiated Password Resets with Roundcube on Ubuntu 16.04

Another key problem I’ve encountered on my journey to making a fully-featured mail server is that it is currently impossible for end-users to set their own passwords.

If you’ve followed along, you’ll know that I’ve followed this blog on how to set up a mail server. Please also look at my previous posts outlining how to set up phpmyadmin, or set it up yourself to make things a little easier.

Again, after some googling, I found some instructions on how to allow users to change their own passwords, and modified them to suit.

Firstly, edit /etc/roundcube/config.inc.php.
Find the line $rcmail_config['plugins'] = array('managesieve'); and change it to:
$rcmail_config['plugins'] = array('managesieve','password');
This enables the password plugin. If you restart the apache service (probably not necessary) and log into roundcube, the option to reset your password will be under Settings, in the Password tab.

Next, we need to give the plugin database access with suitably limited credentials, and give it the right SQL query to use. To limit the damage a malicious person might inflict, I’ve decided to make a new database user with access ONLY to the mailbox/user table and only the power to change the password of the single user currently logged in.

Creating a user can be done via phpmyadmin, or if you’ve come this far, by doing it at the command line.

The key point here is to only allow access to the user (mailbox) table in the database. Again, this can be done using phpmyadmin or, if you’re in a hurry, with the SQL query:

GRANT SELECT (`username`), UPDATE (`password`) ON `mail`.`mailbox` TO 'THEUSERNAME'@'localhost';

Next, we need to edit the settings in /etc/roundcube/plugins/password/config.inc.php.

The file is originally empty, so place inside the php brackets:

$config['password_driver'] = 'sql';
$config['password_confirm_current'] = true;
$config['password_minimum_length'] = 8;
$config['password_require_nonalpha'] = true;
$config['password_log'] = false;
$config['password_login_exceptions'] = null;
$config['password_hosts'] = array('localhost');
$config['password_force_save'] = true;
$config['password_algorithm'] = 'md5-crypt';
// SQL Driver options
$config['password_db_dsn'] = 'mysql://USER:PASSWORD@localhost/mail';

// SQL Update Query
$config['password_query'] = 'UPDATE mailbox SET password=%P WHERE mailbox.username=%u LIMIT 1';
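To see what the md5-crypt hashes configured above look like (handy when sanity-checking a stored value), openssl can generate them. This helper is my own sketch, not part of the plugin:

```shell
# md5-crypt (the password_algorithm above) produces hashes of the form
# $1$<salt>$<hash>; openssl's -1 flag uses the same scheme.
md5_crypt() {
    local pass="$1" salt="$2"
    openssl passwd -1 -salt "$salt" "$pass"
}

# e.g. md5_crypt 'S3cret!pw' 8dE2jX0a
```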

And that’s it! If you have phpmyadmin, I suggest you keep a record of your test user’s original hashed password so you can repair any damage you might do while troubleshooting.