Automating Let’s Encrypt Wildcard Certificate Renewal with Mail-in-a-Box

I use Let’s Encrypt, a fantastic free service, to secure my websites. However, like many people I have multiple sub-domains for my blogs, Gitea and whatever else I fancy spinning up. These are all hosted on a separate box, not my Mail-in-a-Box (MIAB) box. So when Let’s Encrypt announced wildcard certificates I jumped on board.
The only problem I had with wildcard certificates was the extra steps required to automate the whole process. The issue is that Let’s Encrypt (or certbot, the program that does the work) requires TXT records to be updated in publicly available DNS records.
As I also use Mail-in-a-Box for my public DNS and email, I had to write some scripts and crontab entries to automate the renewal process. Thankfully Mail-in-a-Box has an API for doing exactly this.

Notes and Caveats:

  • I’ve seen other blog entries that do the same thing (minus the automation) for a single certificate. However, when I generated my certs some time ago I followed these instructions, and as a result my cleanup.sh script did not work as expected; run it separately, not via the --manual-cleanup-hook call.
  • Read the documentation if you have problems: https://certbot.eff.org/docs/using.html#pre-and-post-validation-hooks
  • If you figure out how to make the cleanup script work as a hook – please let me know in the comments 😉
  • I assume you already have certbot/letsencrypt installed.
  • You will need to substitute your credentials in the scripts – I have a specific account I use for just this, so my personal email/admin account isn’t compromised with the password stored in plain text.
  • Make sure your scripts have the executable bit set (chmod +x).
  • My crontab entries will email root, which is aliased out, to my personal email address.
  • Cron jobs (see comments in scripts) will run at midnight and 5 minutes past midnight respectively, every 5 days.

The Scripts:

These are super basic:
/root/renewal.sh

#!/bin/bash
#add to crontab like this:
#0 0 */5 * * /root/renewal.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
sudo certbot certonly -n --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --manual --manual-auth-hook /root/authenticator.sh --preferred-challenges dns -d "dom.ain" -d "*.dom.ain"

/root/authenticator.sh

#!/bin/bash

curl -s -X PUT -d "$CERTBOT_VALIDATION" --user user@dom.ain:<password> https://<your.miab.url>/admin/dns/custom/_acme-challenge.dom.ain/txt
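
If you want to confirm the hook actually published the record (or to debug a failed validation), you can query DNS directly; a quick manual check, assuming dig is installed:

dig +short TXT _acme-challenge.dom.ain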

/root/cleanup.sh

#!/bin/bash
#Add to crontab:
#5 0 */5 * * /root/cleanup.sh | mail -s "Lets Encrypt Certificate Renewal" root >/dev/null 2>&1
# get the txt record
TOREMOVE=`curl -s -X GET --user user@dom.ain:<password> https://<your.miab.url>/admin/dns/custom/_acme-challenge.dom.ain/txt | grep "value" | awk '{print $2}' | sed 's/"//g'`
echo "removing $TOREMOVE"
curl -s -X DELETE -d "$TOREMOVE" --user user@dom.ain:<password> https://<your.miab.url>/admin/dns/custom/_acme-challenge.dom.ain/txt
service apache2 restart

Finally, if you want to check the certificate run:

certbot certificates

Improved Speedtest-CLI

I’ve recently changed ISPs and wanted to see how my speeds compared to those reported by my ISP. However, the speedtest utility on Linux only tests one server at a time. While it is possible to specify a server, and to get a list of local servers to test, I would rather automate the process. And I did!

I’ve made a speedtest wrapper in Python 3 and added it to my private gitea repository: https://gitea.warbel.net/wargus/improved-speedtest

By default the program will search for all servers matching a specified string, then test each one of them one after the other.
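
The idea is simple enough to sketch in shell (the real wrapper is Python 3 and lives in the repository above; this sketch leans on speedtest-cli’s --list and --server flags, and "perth" is just a hypothetical search string):

# List every server whose description matches a search string,
# then run a full test against each one in turn.
speedtest-cli --list | grep -i "perth" | cut -d')' -f1 | while read -r id; do
    speedtest-cli --server "$id"
done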

Self-hosting Git with Gitea, Lxc and Apache

This is a quick tutorial on how to set up Gitea on Ubuntu 18.04.2 using Linux Containers (LXC). As I’ve already set up LXC, this will assume you already have a working configuration. I’ve also assumed you have Apache 2 working with the proxy modules enabled.

Why Use Gitea?

Gitea is an open-source replacement for GitHub, with many of the same features. While I personally use and enjoy GitHub, I’ve always wanted the freedom to keep my code on my own server. If GitHub ever changes, I’ll always have my own repositories protected and managed in a way that I choose.

Getting Started:

In my example, I’ll be creating a new Ubuntu LXC container from the image ‘ubuntu’ called gitea and then opening a shell in the new container.

$ lxc launch ubuntu gitea; lxc exec gitea -- bash

From the shell, update all the packages, install bash-completion and ssh, create a new non-superuser account for yourself, and create a system account for the new service, gitea:

# apt update; apt full-upgrade -y; apt install bash-completion ssh
# adduser <your-username>; adduser --system --group git
# exit

Log into your new container via ssh with your unprivileged account. First check the IP address with lxc list:

lxc list
ssh <username>@ip_address

Create the directories you need for gitea and change the permissions:

$ sudo mkdir -p /etc/gitea /var/lib/gitea
$ sudo chown git:git /etc/gitea /var/lib/gitea

Download Gitea, checking the download page for the current version first; 1.7.0 was the latest at the time of writing:

$ cd /usr/local/bin
$ sudo wget -O gitea https://dl.gitea.io/gitea/1.7.0/gitea-1.7.0-linux-amd64
$ sudo chmod +x gitea

At this point you should be good to follow the instructions on Gitea to set up the service: https://docs.gitea.io/en-us/linux-service/
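
For reference, the unit file those instructions walk you through looks roughly like this (an abridged sketch only; check the linked page for the current version and adjust the paths to match your install):

/etc/systemd/system/gitea.service

[Unit]
Description=Gitea
After=network.target

[Service]
# Run as the unprivileged git user created earlier.
User=git
Group=git
WorkingDirectory=/var/lib/gitea
ExecStart=/usr/local/bin/gitea web -c /etc/gitea/app.ini
Restart=always
Environment=USER=git HOME=/home/git GITEA_WORK_DIR=/var/lib/gitea

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable gitea, then start it.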

The last step is to start the service and complete the basic configuration. You’ll need to browse to the service at http://<container-ip>:3000.
I’ve chosen to set up Gitea with a SQLite3 database for simplicity. In larger organisations, you might consider using MariaDB/MySQL and setting up LDAP integration.

Setting up Apache

To keep my sub-domain configuration simple and easy to identify, I have a separate config file for each website, and I recommend doing the same. Create your config file in /etc/apache2/sites-available. As I have a wildcard certificate from Let’s Encrypt, I can simply reuse large chunks of configuration from other virtual hosts. In my example, my Gitea website is called gitea.warbel.net.

Apache is running on another LXC container, which I’ve logged into.

$ sudo touch /etc/apache2/sites-available/006-gitea.warbel.net.conf

Open the new file and copy in the config. At this point, consider looking at the documentation from Gitea: https://docs.gitea.io/en-us/reverse-proxies/
The configuration comes in two parts: the first virtual host, on port 80, simply redirects the user to port 443; the second does the reverse proxying. As it proxies the entire subdomain, pay close attention to the ProxyPass and ProxyPassReverse directives; a misplaced ‘/’ can really screw things up!
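
Something along these lines (a minimal sketch rather than my exact file; the container IP is a placeholder, and the SSL directives can be copied from one of your existing Let’s Encrypt virtual hosts):

<VirtualHost *:80>
    ServerName gitea.warbel.net
    # Send all plain-HTTP traffic to the HTTPS virtual host below.
    Redirect permanent / https://gitea.warbel.net/
</VirtualHost>

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName gitea.warbel.net
    SSLCertificateFile /etc/letsencrypt/live/warbel.net/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/warbel.net/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # Reverse proxy the whole subdomain to Gitea in its container.
    # Note the trailing slashes: they must match on both directives.
    ProxyPreserveHost On
    ProxyPass "/" "http://<gitea-container-ip>:3000/"
    ProxyPassReverse "/" "http://<gitea-container-ip>:3000/"
</VirtualHost>
</IfModule>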

Setting Up Gitea

At this point you should be able to connect to your Gitea website on its subdomain. As the documentation on the Gitea website makes clear, be sure to make Gitea aware that it is behind a proxy by setting the ROOT_URL directive in the application config (/etc/gitea/app.ini):
[server]
ROOT_URL = https://gitea.warbel.net/

Other Considerations

I set my Gitea up with SQLite3 as the database driver, for simplicity’s sake. Given I already have MariaDB established I could have configured it with a proper database, but given the traffic considerations (i.e. I’ll likely be the only contributor) it seemed like overkill.

Additionally, I have reconfigured my LXC container to set itself up with a static IP address and added that address to my Ansible update scripts for maintenance/documentation purposes. I’ve also set up Gitea with an email address on my mail server to ensure that it can email me notifications.

Troubleshooting

Most of the problems I faced were quite early on, and mostly centred on directory and file permissions. If you run the gitea service manually, check the logs and the output of systemctl for any errors.
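
For example, with the service installed as above (unit name gitea assumed):

$ sudo systemctl status gitea
$ sudo journalctl -u gitea -e
# or run the binary by hand and watch the output directly:
$ sudo -u git /usr/local/bin/gitea web -c /etc/gitea/app.ini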

Connecting to the MET Office Weather Observation Website

A short post on contributing to the MET Office Weather Observations Website.
(What to do with all this data I have)

A colleague pointed out that the Bureau of Meteorology is contributing to a massive online project hosted by the MET Office that collects weather observation information from the community (link here), and that I should contribute my data.

My only problem was that my data is collected by a bespoke weather station, which lacked a mechanism to push the data to the MET Office’s servers. So, using their documentation, I wrote one in Python and set it up as a cron job on my server.
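
The upload itself is just an HTTP request with your site ID, authentication key and readings as query parameters. The sketch below shows the idea in shell form rather than my actual Python script; the parameter names follow the WOW automatic-upload documentation, and the placeholder values are mine, so check the docs before relying on it:

# Push one observation to the MET Office WOW site.
# <site-id> and <auth-key> come from your WOW site settings;
# readings are in imperial units (tempf = temperature in Fahrenheit).
curl -sG "http://wow.metoffice.gov.uk/automaticreading" \
  --data-urlencode "siteid=<site-id>" \
  --data-urlencode "siteAuthenticationKey=<auth-key>" \
  --data-urlencode "dateutc=$(date -u '+%Y-%m-%d %H:%M:%S')" \
  --data-urlencode "tempf=68.2" \
  --data-urlencode "softwaretype=my-weather-push"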

Here is my weather station website on the MET site. And here is a link to my script, in Python, that does the work.

Building an Arduino Weather Station with ELK Stack

Having recently been introduced to the Elasticsearch, Logstash, Kibana (ELK) stack by a colleague, and having wanted to build a home-brew weather station for some time, I decided to combine both projects. There was surprisingly little information available online about building an Arduino weather station, and although it had been done, everyone seems to have their own take and hardware requirements. Two sites provided useful information for this project:

  1. The manufacturer’s wiki of the weather station components and;
  2. This blog entry, written by a staff member at Elastic.co.

As a side note, once I figured out how to use the ELK stack and set up my data types, I also integrated information from a Fronius inverter. The inverter has a well-documented API that outputs its data in JSON format. I used Logstash to periodically pull the data from the inverter and output it into Elasticsearch. My Arduino is set up in a similar way to the inverter: it hosts a simple website that only displays the current weather data in JSON format.
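
If you have one of these inverters, you can see the same JSON that Logstash consumes with a one-liner (endpoint per the Fronius Solar API v1 documentation; substitute your inverter’s address):

curl -s "http://<inverter-ip>/solar_api/v1/GetInverterRealtimeData.cgi?Scope=System"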

Things you will need:

  • An ELK stack. Elastic.co provide significant documentation on setting up the Elastic Stack, so for the sake of simplicity I won’t be covering this here.
  • A weather station with anemometer from DFRobot or a reseller; I bought my equipment from Little Bird Electronics.
  • An Arduino with an Ethernet shield, or a Freetronics EtherTen.
  • A passive PoE adaptor to supply power to your Arduino.
  • The latest version of the Arduino IDE, with the Time library added to your Arduino environment. I prefer to do my coding in Microsoft Visual Studio Code, which has plugin support for Arduino.
  • Cat6 cable, telephone cabling (with RJ11 headers), some M->F breadboard cabling, a waterproof project box, and an appropriate mast that can support the weather station if needed.
  • Access to my source code on GitHub.

Building and Programming the Hardware:

In my case, I opted to build a new, more stable mast, entirely separate from my TV antenna mast. Bunnings was able to provide the mast equipment I needed. As you may notice, I have also mounted a Ubiquiti Rocket M5 with a 120 degree antenna (nothing to do with this project!).

New Mast

Unfortunately the serial cables provided in the weather station kit were not long enough to do anything useful, so I had to buy two RJ11 joiners, RJ11 headers and cabling to extend the runs to the eaves where I mounted the electronics.

Roofing cabling

Project box mounted under the eaves

It ended up being quite a task running the cabling to the eaves, but it was worth it in the end! As I wanted to mount the sensors outside, I opted to go with a power-over-ethernet (PoE) system, which required a Cat6 cable and passive PoE injectors. The weather station sensors on the roof are connected to the weather station circuit board via the two RJ11 cables, pictured below.

The Arduino and sensors were mounted DIY-fashion inside a project box. I used a Dremel to cut a hole in the plastic lid to feed the cabling through so that it would sit flush with the eaves.

Programming:

Arduino:

In order to collect and output the data I recycled some code from the Arduino website, and some code from other places (credit in the code). My contribution was building in some error checking. The sensor unit, as it turned out, was not always accurate and would quite often return garbage data. To fix this I programmed the Arduino to read the data twice and compare the results. Both readings had to pass a sanity check: here in Perth, WA it never gets below -5°C or above 55°C, nor does the barometric pressure wildly fluctuate. So the data is only reported if both readings are within range and reasonably close to each other. Strictly speaking this will not stop all bad data, but it seems to do a good enough job 99.99% of the time. See above for a link to my GitHub project.

ELK Stack:

I’ve provided on GitHub the pipeline configuration you will need for Logstash to connect to your Arduino and pull the JSON data. If you don’t have an OpenWeatherMap account, it is possible to remove the appropriate section from the 01-http-weather.conf file. You will need to add the files to your /etc/logstash/conf.d directory and register the pipelines with Logstash by modifying the pipelines.yml file.
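
A pipelines.yml entry is just an ID and a path; something like this, assuming you kept my file names (the pipeline ID is arbitrary):

- pipeline.id: weather
  path.config: "/etc/logstash/conf.d/01-http-weather.conf"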

The last step will be to use the code snippets in the mappings_script file. From within the Dev Tools section of Kibana, run the PUT command below. This will tell Elasticsearch about the data coming in and correctly map the data types:

PUT /weather/_mappings/doc
{
  "properties": {
    "coord": {"type": "geo_point"},
    "coordlocal": {"type": "geo_point"},
    "dt": {"type": "date"},
    "localdt": {"type": "date"},
    "sys.sunrise": {"type": "date"},
    "sys.sunset": {"type": "date"},
    "clouds.all": {"type": "float"},
    "main.humidity": {"type": "float"},
    "main.temp": {"type": "float"},
    "main.pressure": {"type": "float"},
    "rain.3hr": {"type": "float"},
    "visibility": {"type": "float"},
    "weather.humidity": {"type": "float"},
    "weather.temp": {"type": "float"},
    "weather.pressure": {"type": "float"},
    "wind.deg": {"type": "float"},
    "wind.speed": {"type": "float"},
    "wind.localspeed": {"type": "float"},
    "wind.localdeg": {"type": "float"},
    "wind.localgust": {"type": "float"},
    "localrain.1h": {"type": "float"},
    "localrain.24h": {"type": "float"}
  }
}

And that is it!

At this point, you should have a working weather station with Logstash pulling the data and pushing it to Elasticsearch. I haven’t detailed how to build the graphs or dashboards to display the data, but there is plenty of documentation available online.

Sensor Inaccuracy/Future Improvements:

After mounting the sensors I discovered that the heat generated by the electronics interferes with the temperature readings. In the chart, the top line represents the temperature read from my weather station, and the bottom line is data from OpenWeatherMap.

The solution, when I have time later, will be to drill more holes in the project box to allow better airflow, and to partition the Arduino away from the sensors to better isolate the heat that is generated.

Using Xnest or Putty/VcXsrv to Start a Full Remote Session

As useful as ssh and command-line (CLI) tools are when using a Linux/Ubuntu system remotely, sometimes they simply aren’t enough. And I dare say, sometimes it’s better to use a graphical tool than the CLI alternative; parted/gparted: I’m looking at you!

Sometimes you may even want to use a full graphical desktop remotely (audible gasps!). This can be handy if you’re developing a desktop system on a remote server or virtual machine and want to experience the system the same way as the end user. I’ve found it useful when developing Linux code from a Windows desktop and wanting to run a full development environment remotely. Setting up a remote session, I’ve found, also resolves some quirky display issues when using a proprietary GUI program remotely.

In this tutorial I’ll show you how to tunnel X11 through ssh from both a computer running Windows (with Putty and VcXsrv) and a Linux computer (with Xnest). I will show you how to tunnel an entire desktop session or an individual program.

Before we begin, configure the remote server:

I’m assuming that port 22 is open on your remote server and you are capable of logging into the server with ssh.

Edit the file /etc/ssh/sshd_config (with sudo if necessary)

sudo vi /etc/ssh/sshd_config

And ensure that X11Forwarding yes is set and set AddressFamily inet:

X11Forwarding yes
...
AddressFamily inet

WARNING: Setting AddressFamily to inet will force ssh to only use ipv4 addresses. If you use IPv6, then this tutorial is not for you!

Next, install lxde – a lightweight desktop environment:

sudo apt install lxde lxsession

Generally, I prefer using Linux Mint with Cinnamon, but LXDE provides a nice environment that does not use a lot of network bandwidth. There are alternatives though, and I do suggest you look into them if you’re interested.

Finally, restart ssh, log out and log back into your server to make sure everything is working.

sudo service ssh restart;exit

Let’s start with the easier option first, Linux:

Running a single program remotely:

At this point your remote server is already configured to allow X11 forwarding over ssh. If you log into your remote server with ssh -X and run a graphical program (assuming one is installed), the program will magically appear in front of you. Be aware that it will take over your terminal unless you push it to the background with an ampersand (&), for instance:

user@RemoteServer$ gparted &

If you experience trouble at this point, it is worth connecting with ssh using the -vv -X flags to see the error messages.

Running an entire session remotely: 

On your Linux/Ubuntu/Mint Desktop, open a terminal and install xnest:

sudo apt install xnest

Xnest is both an X11 server and an X11 client. There is a lot written about it, so again, Google is your friend if you want more information. Once installed, create a new script called remote_session.sh, make it executable and edit it:

touch remote_session.sh;chmod +x remote_session.sh;vi remote_session.sh

Add the following, modified to suit your personal circumstances:

#!/bin/bash

Xnest :3 -ac -geometry 1500x990 &
export DISPLAY=:3
ssh -X HOSTNAME_OF_REMOTE_SERVER lxsession

The bash script is very simple and is in three parts.
The first line calls Xnest, starting a new X11 server, fully windowed with the dimensions 1500×900, you will need to adjust this to your preferred resolution. Then export will tell all your graphical programs to use Xnest, rather than your original X11 server. This will only apply to programs run from your current terminal. Finally, the last command, ssh will connect remotely to your server and run lxsession, the lxde session manager. If it is all successful, you should have a new window appear with a full desktop session appearing in it. 

Run the script by typing at the terminal:

./remote_session.sh

Lxde Session running in Xnest

Your session may not have the nice background picture, which I set to Greenish_by_EstebanMitnick.jpg in /usr/share/backgrounds.

Once finished working in the remote session, simply close your programs like normal, and close the Xnest window.

Connecting From Windows:

The process is very similar to the above. I’ve assumed you’ve installed VcXsrv and are somewhat familiar with Putty.

Running a Single Program:

To run a single program so that it integrates into your Windows environment, launch the VcXsrv program by double-clicking its icon on the desktop; its icon will appear in your notification area. Open Putty, and ensure that ‘Enable X11 forwarding’ is ticked in the Putty options before connecting to your box:

Tick the box!

Once logged in, run your graphical program via Putty, and it should appear magically in front of you! (This is my preferred way to access virt-manager from a Windows computer.)

Running an Entire Session:

The process to connect with Putty is the same, except that instead of running a single graphical program, say gparted, you start a session manager such as lxsession.

Additionally, you will need to use XLaunch, which will ask you to configure your X11 server.

Select the One Large Window option when prompted.

Most of the options that XLaunch presents can be clicked through without modification, just ensure you choose ‘One Large Window’.

Once you’ve run VcXsrv as outlined above, log in with Putty, run your session manager (lxsession) and boom! You should have an entire remote session running on your Windows computer.

Remote Linux on Windows!

UWA Boat Club Event Photos – 2008

As per a recent request, here are the photos I took during 2008 of events attended by or organised by the University of Western Australia Boat Club (UWABC). Photos remain my property and cannot be reproduced without permission.

How to configure a Unifi Controller behind an Apache Reverse Proxy with LetsEncrypt

Background:

I had to do quite a bit of searching in order to get Unifi to work correctly behind an Apache reverse proxy. I found that many people had come up with their own solutions, with various odd, to say the least, configuration options in Apache that were mostly unnecessary. It took a little more searching, but eventually I found out how to prevent the WSS error from appearing too.

Before Beginning:

I assume that you have:

  • Already configured Apache and Let’s Encrypt previously.
  • DNS already configured correctly and you can easily add another sub-domain.
  • Already installed and configured Unifi Controller on a box, or VM somewhere.

As Unifi runs on a high port (above 1024), I installed the controller directly onto my Apache 2 server.

By the end of the process you should have a functional Unifi controller at unifi.domain.com.

Configuration:

Before beginning, ensure that you’ve created a new subdomain and pointed it at your public IP. Next, use Let’s Encrypt to expand your certificate to include the new domain. I usually run this in standalone mode and turn off Apache 2 while expanding the certificate.

sudo service apache2 stop
sudo letsencrypt certonly --standalone --expand -d unifi.domain.com -d www.domain.com -d subdomain.domain.com

Once complete, start apache again.

Create a new site in /etc/apache2/sites-available/ called unifi.domain.com-le-ssl.conf.
Edit the file to contain the text below. Be sure to edit the references to your SSL certificate files, document root, ServerName, etc., and the IP address of your Unifi host. Be aware that my Unifi controller runs on the same host as my Apache server. If needed, you can get the Let’s Encrypt information from one of your other sites’ configuration files.

<IfModule mod_ssl.c>
<VirtualHost unifi.domain.com:443>
    # The ServerName directive sets the request scheme, hostname and port that
    # the server uses to identify itself. This is used when creating
    # redirection URLs. In the context of virtual hosts, the ServerName
    # specifies what hostname must appear in the request's Host: header to
    # match this virtual host. For the default virtual host (this file) this
    # value is not decisive as it is used as a last resort host regardless.
    # However, you must set it for any further virtual host explicitly.
    #ServerName www.example.com

    ServerAdmin webmaster@domain.com
    # DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    #LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # For most configuration files from conf-available/, which are
    # enabled or disabled at a global level, it is possible to
    # include a line for only one particular virtual host. For example the
    # following line enables the CGI configuration for this host only
    # after it has been globally disabled with "a2disconf".
    #Include conf-available/serve-cgi-bin.conf

    SSLCertificateFile /etc/letsencrypt/live/domain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
    ServerName unifi.domain.com

    ProxyRequests Off
    ProxyPreserveHost On

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    SSLProxyEngine On
    SSLProxyVerify none

    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off

    AllowEncodedSlashes NoDecode
    ProxyPass "/wss/" "wss://127.0.0.1:8443/wss/"
    ProxyPassReverse "/wss/" "wss://127.0.0.1:8443/wss/"
    ProxyPass "/" "https://127.0.0.1:8443/"
    ProxyPassReverse "/" "https://127.0.0.1:8443/"
</VirtualHost>
</IfModule>

Then enable the site with:

sudo a2ensite unifi.domain.com-le-ssl.conf; sudo service apache2 reload

And that should do it! Any questions or comments, please post below.

How to Email System Logs via the Terminal, Cron and SMTP

Background:

Every day I run an rsync job that transfers backups between two servers. The job is a two-part cron job, as seen below:

30 23 * * * /home/<user>/rsync.sh >/dev/null 2>&1
0 6 * * * killall rsync >/dev/null 2>&1;

The job starts at 11:30pm and is killed at 6am. The script that it calls does the following:

#!/bin/bash
# Delete logs (and the old tarball) more than eight days old.
find /var/log/rsync/ -mtime +8 | xargs -I % sh -c 'rm -f %';
# Roll the remaining logs into a tarball, then remove the originals.
find /var/log/rsync/log.* | xargs -I % sh -c 'tar -rf /var/log/rsync/rsync.1.tar %; rm -f %';
# Run the backup itself, writing a fresh timestamped log.
rsync --bwlimit=1050 --protect-args --delete --size-only --copy-dirlinks --log-file=/var/log/rsync/log.`date +"%Y%m%d_%H%M%S"` -avP -e "ssh -T -o Compression=no -x" "/path/to/files/" "<user>@domain:/path/to/files/";

Basically, it deletes logs older than eight days, rolls the remaining logs into a tarball, and then runs the backup, creating a new log. Generally, I log in periodically and manually check the logs to make sure everything is working as it should. What I want to do, simply, is to have it email the contents of the log every day, saving me the 30 seconds of trouble of logging in and checking manually.

As I have a ‘proper’ mail server with SMTP/IMAP, I want to use it to send the logs.

Installing and Configuring Packages:

sudo apt install mailutils ssmtp

Configure ssmtp by editing the main config file: /etc/ssmtp/ssmtp.conf. Comment out all the other lines so your configuration looks like this:

mailhub=mailserver.domain.com:587
UseSTARTTLS=YES
AuthUser=user@domain.com
AuthPass=password

You will need to have configured a mail user on your mail server. All mail will be sent from the user@domain.com address. This isn’t a problem, as the only mail I’m sending from this server is alerts and logs. In server environments where multiple users send general mail, this setup will not be appropriate.

Next, edit the revaliases file in the same directory. Add the details for the user who will be running the command to send email:

localuser:user@domain.com:mailserver.domain.com:587

That’s the configuration done!

Test sending an email with the following:

echo "this is a test" | mail -s "Test Email" email@your.address.com

Check the contents of the syslog:

:~$ tail -3 /var/log/syslog
Sep 26 08:47:21 servername sSMTP[23535]: Creating SSL connection to host
Sep 26 08:47:22 servername sSMTP[23535]: SSL connection using RSA_AES_128_CBC_SHA1
Sep 26 08:47:25 servername sSMTP[23535]: Sent mail for user@domain.com (221 2.0.0 Bye) uid=1000 username=localuser outbytes=4792

Success!

Automate sending the logs:

Change the crontab file with:

crontab -e

Add the email command to the end of the job that kills the process:

30 23 * * * /home/wargus/rsync.sh >/dev/null 2>&1
0 6 * * * killall rsync >/dev/null 2>&1; cat /var/log/rsync/log* | mail -s "Rsync Log for `date`" warren@warbel.net

Further Reading:

https://linux.die.net/man/8/ssmtp
https://www.nixtutor.com/linux/send-mail-with-gmail-and-ssmtp/
https://stackoverflow.com/questions/20318770/send-mail-from-linux-terminal-in-one-line
https://tecadmin.net/send-email-smtp-server-linux-command-line-ssmtp/

Configuring Powershield UPS on Linux and Integrating into Zabbix

Background:

Like many IT people in Perth, Australia, I buy my gear for the most part from PLE Computers, and that includes my uninterruptible power supplies (UPSes). The most reasonably priced desktop-grade UPSes are the Powershield Defender series, of which I have two:

  • Power Shield Defender LCD 650VA UPS (requiring the blazer_usb driver)
  • Power Shield Defender LCD 1.2KVA UPS (requiring the usbhid-ups driver)

On Windows I would simply plug in the devices and install their drivers. On Linux, however, nothing is that simple. This guide will work through connecting and configuring the UPSes on Linux. As it’s important to know the status of the batteries and when it’s time to replace them, I also want to be able to monitor my UPSes using my monitoring solution, Zabbix.

Install Network UPS Tools

To get started, install the Network UPS tools.

sudo apt install nut

Identify Your UPS

The 1.2KVA identifies itself as:

:~$ lsusb
...
Bus 001 Device 003: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS
...

And the 650VA reports as:

:~$ lsusb
...
Bus 004 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
...

Configure NUT

Edit /etc/nut/ups.conf:

As the two UPSes use different drivers, append the following to the end of the file, replacing the name in brackets with your own if you like:

[defender]
# use either blazer_usb or usbhid-ups depending on your UPS
driver = blazer_usb
port = auto
desc = "Add your description"

Edit /etc/nut/nut.conf and change:

MODE=none

to:

MODE=standalone

Add users to the nut-monitor service. These users can either change settings on the UPS or simply have read access. Edit the file /etc/nut/upsd.users, then un-comment and edit the lines:

[admin]
password = yourpassword
actions = SET
instcmds = ALL
...
[upsmon]
password = yourotherpassword
upsmon master

Creating the admin account will allow you to test or send commands to the UPS. More on that later.

As the instructions in the file itself say, edit the /etc/nut/upsmon.conf file next. It is worth reading the options and setting them to your desired state; pay particular attention to the MONITOR section. Append the following to your file:

MONITOR defender@localhost 1 upsmon yourotherpassword master

Start the service and check that everything is working:

$ sudo service nut-server restart
$ sudo service nut-server status
● nut-server.service - LSB: Network UPS Tools initscript
 Loaded: loaded (/etc/init.d/nut-server; bad; vendor preset: enabled)
 Active: active (running) since Fri 2017-09-15 16:08:42 AWST; 4s ago
 Docs: man:systemd-sysv-generator(8)
 Process: 19871 ExecStop=/etc/init.d/nut-server stop (code=exited, status=0/SUCCESS)
 Process: 19878 ExecStart=/etc/init.d/nut-server start (code=exited, status=0/SUCCESS)
 Tasks: 2
 Memory: 2.4M
 CPU: 50ms
 CGroup: /system.slice/nut-server.service
 ├─19906 /lib/nut/usbhid-ups -a defender
 └─19908 /lib/nut/upsd

Sep 15 16:08:42 atlas systemd[1]: Starting LSB: Network UPS Tools initscript...
Sep 15 16:08:42 atlas nut-server[19878]: * Starting NUT - power devices information server and drivers
Sep 15 16:08:42 atlas usbhid-ups[19906]: Startup successful
Sep 15 16:08:42 atlas upsd[19907]: listening on 127.0.0.1 port 3493
Sep 15 16:08:42 atlas upsd[19907]: not listening on ::1 port 3493
Sep 15 16:08:42 atlas upsd[19907]: Connected to UPS [defender]: usbhid-ups-defender
Sep 15 16:08:42 atlas upsd[19908]: Startup successful
Sep 15 16:08:42 atlas nut-server[19878]: ...done.
Sep 15 16:08:42 atlas systemd[1]: Started LSB: Network UPS Tools initscript.

Testing and Configuring the UPS

Run the command below to get the current status of the UPS:

$ sudo upsc defender@localhost

It will return a long list of values if it is successful.

Run a quick test of the battery with the admin account and check the progress:

$ sudo upscmd -u admin -p yourpassword defender test.battery.start.quick 
$ sudo upsc defender@localhost
ups.status: OL DISCHRG
ups.test.result: In progress
...
$ sudo upsc defender@localhost
ups.status: OL CHRG
ups.test.result: Done and passed

More commands for the blazer_usb driver can be found in its documentation (linked in the resources below); the test command, at least, also works with the usbhid-ups driver.

Having come this far you should have a basic UPS in a working configuration.

Configure Zabbix

Download or clone the git repository onto each computer with a UPS attached.

$ git clone https://github.com/delin/Zabbix-NUT-Template.git
$ cd Zabbix-NUT-Template

Copy the files to their proper location:

$ sudo cp -r sh/ /etc/zabbix/
$ sudo cp zabbix_agentd.d/userparameter_nut.conf /etc/zabbix/zabbix_agentd.conf.d/

Restart the Zabbix services both on the agent and server.

sudo service zabbix-agent restart
sudo service zabbix-server restart

On your desktop, download/clone the git repository and log into Zabbix. Follow the repository’s instructions to create the value mapping.

Import the Zabbix template in the usual way and link it to your hosts.

If you feel like it, create a new screen to monitor your UPS.

And you’re done! No more guessing and hoping your UPSes haven’t swapped to battery while you’re away from home.

Troubleshooting:

The Powershield UPS that uses the usbhid-ups driver has a habit of dropping out, with an error message that the data is stale. I attempted a workaround with the following script in /root/restart_service.sh:

#!/bin/sh
# Get the error state: the first word of the last line of upsc output.
ErrorState=`upsc defender@localhost 2>&1 | grep -v SSL | cut -b 1-5 | tail -1`
# If the error state is "Error", restart the service.
if [ "$ErrorState" = "Error" ]
then
    service nut-server restart
    echo "Restarting nut-server" >> /var/log/syslog
fi

Then I edited root’s crontab with sudo crontab -e and added the following line:

* * * * * /bin/bash -l -c "/root/restart_service.sh; sleep 30 ; /root/restart_service.sh"

Unfortunately this did not resolve my issue! Eventually I played around with a few settings, ultimately arriving at adjusting maxretry in ups.conf, changing it to:

maxretry=5

I also adjusted the polling interval to 60 seconds.
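
In /etc/nut/ups.conf terms that looks something like this (a sketch; maxretry and pollinterval are global ups.conf settings, and 5 and 60 are the values described above):

# global driver settings in /etc/nut/ups.conf
maxretry = 5
pollinterval = 60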

Resources:

Big thanks to http://nitestick.net/nut-for-defender-1200/, which I mostly followed to get this working.
Blazer USB documentation: http://networkupstools.org/docs/man/blazer_usb.html
Zabbix NUT templates: https://github.com/delin/Zabbix-NUT-Template
NUT documentation page, which helped me to narrow down the drivers I needed: http://networkupstools.org/stable-hcl.html
I also referenced: http://tedfelix.com/software/nut-network-ups-tools.html