Monitoring an off-grid system with Kauri

In 2016 I finished my Software Engineering degree at the University of Waikato with my fourth year honours project. The objective of my project was to create a system which would allow the owner of an off-grid house to monitor their energy system and access this information from anywhere.

The Kauri logo

What resulted is Kauri Energy Monitor, a cloud-based system for monitoring a renewable energy system, and I’ve recently gotten it to the point where it’s ready to release to the world!

In this post I’ll cover what Kauri can do, how you can start using it, and a few things the process has taught me.

Who would need this?

Some houses are fitted with renewable energy systems - this is a very broad term for various configurations of equipment which allow the occupants to generate their own electricity. Depending on the exact configuration, these may also have a form of energy storage or backup generation.

An off-grid system is a good example of a renewable energy system. These might have solar or wind generation, a bank of batteries for energy storage and a generator as a backup energy supply. On the complete opposite end of the spectrum, grid connected systems usually feed their excess generation back to the grid and consume energy from the grid when required.

A diagram of a typical off-grid system.

In all of these situations it can be difficult to figure out what your energy system is doing. Is it currently charging? How long will the battery last? How efficient is my generation?

Kauri answers these questions and allows users to do this from any internet connected device.

What can Kauri do?

Users of Kauri set up energy sensors at various points in their energy system. These sensors are hooked up to a computer running a piece of software called Kauri Bridge which sends readings to the Kauri server.

By analysing the energy flow data, Kauri shows an overview of all the energy flowing through the system. This tells the user the amount of energy being generated, consumed by the house or stored in the battery. Kauri also feeds this information into the B42SOC algorithm to calculate the state of charge of the system’s batteries (i.e.: battery level).

The summary page.
The Summary page in Kauri shows current energy flows and battery state.

Kauri determines the future state of the system using battery state information and energy flow patterns developed over time. This lets Kauri answer questions like: when will the battery be fully charged? When will the battery run out? What battery level will the system be at in 5 hours?
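Kauri’s forecasting is more involved than this, but the core of a “when will the battery run out?” estimate can be sketched from just the battery capacity, state of charge and current net flow. This is illustrative only, not Kauri’s actual algorithm:

```javascript
// Illustrative only - not Kauri's actual forecasting algorithm.
// Estimate hours until the battery is empty given its capacity (Wh),
// current state of charge (0..1) and net power flow (W, negative = draining).
function hoursUntilEmpty(capacityWh, stateOfCharge, netFlowW) {
  if (netFlowW >= 0) return Infinity; // charging or balanced: never empties
  return (capacityWh * stateOfCharge) / -netFlowW;
}

// A half-full 5 kWh bank draining at a net 500 W lasts about five more hours.
console.log(hoursUntilEmpty(5000, 0.5, -500)); // 5
```

The real system also has to account for the energy flow patterns changing over time (e.g. solar generation starting at sunrise), which is why Kauri learns those patterns rather than assuming a constant draw.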

The future state page.
The Future State page in Kauri shows the state of the system into the future.

How can I use Kauri?

Getting a bridge and some sensors sorted

You’ll need to have some supported sensor devices set up in your energy system (the “What do I need?” page will help you figure out which sensors you’ll need). Once you’ve hooked these up to a computer that can run Kauri Bridge, you can set up your Kauri server.

The Smart Circuit 20 device exterior.
A Smart Circuit SC20 device - an AC sensor supported by Kauri Bridge.

Running your own Kauri server

Kauri is open source so you can run it on your own server for free! The getting started guide runs you through the process of setting up Kauri on your own server.

See the getting started guide.

Want a hosted option?

An image of a cloud.

Currently if you want to use Kauri you’ll need to be comfortable with hosting the Kauri server yourself. If you aren’t, then I’d love to offer this as a paid monthly service if there is enough interest.

Express your interest in a hosted option here!

Contributors wanted!

I’ve really enjoyed working on Kauri, but I’m keen to get onto some new projects! If you want to improve the project, please feel free to submit a pull request! I’m happy to provide advice for further development.

Contribute on GitHub!

Want to learn more?

If you’d like to know more about how it all works, feel free to read my honours project report:

A PDF icon.

Read "Cloud Based Monitoring of a Renewable Energy System"

Or, watch a video of a slightly younger version of myself presenting my honours project:

What has this project taught me?

This whole project has taught me so much technically and it’s the biggest personal project I’ve ever embarked on, but there are a few specific things that I’ve learned that I want to discuss.

Give your project a name

Kauri didn’t really have a proper name until recently - it was usually just known as ‘my honours project’ or ‘Offgrid Monitoring’. Giving a project a name gets you thinking about the purpose and scope of the project which makes it easier to reason about its features.

For example, by calling it ‘Kauri Energy Monitor’ I decided that I would support as many configurations of renewable energy systems as possible so I shouldn’t prioritise features that only benefit users of off-grid systems.

Ask the question, will I really need that?

While getting Kauri ready for release I listed off loads of features that I thought would be useful. These would have been nice to implement, but they weren’t driven by any actual user requirement, and their absence wasn’t the biggest barrier to adoption by users.

The lesson here is to ensure any change you make provides enough value to be worth the cost of implementation. This isn’t easy for projects which you’re passionate about but it means you aren’t wasting time implementing something which no one needs.

Think about project handover

While I was first developing Kauri it had a single production instance that was kept up to date as the system developed. If I needed to play with some real data I just downloaded a database dump and applied it to my local database. I didn’t need documentation at that point because I had all the context in my head and I was the only developer. Some actions could only be completed via the API as it wasn’t worth the time building an interface to add a building if I only needed to add a building once.

However, after spending a few months away from Kauri I experienced the cold introduction that any new developer would get. How do I get it running on my system? What if I want some data running locally without a real renewable energy system to test with?

I realised that I needed to:

  • Improve the configuration interface so that users didn’t need to use the API to configure Kauri.

  • Write some basic documentation - I covered how to set up an environment, some of the background to the project and added a getting started guide.

  • Provide a set of mock data to allow users to experiment without needing a renewable energy system.

  • Build tooling to populate Kauri with example data.

Always ask yourself, what would be the biggest pain points for continued development if I had to hand it over to someone else tomorrow?

Ten percent of my work time is dedicated to professional development through two ‘Hackdays’ a month. For my most recent Hackdays I wanted to combine old-school phone technology with ‘modern’ web technologies. I came across a service called Plivo which lets you work with the phone network through a web API.

What is Plivo?

The Plivo logo

Plivo is a web service that gives us a web API to interact with the phone network. You rent a number from Plivo and pay for each minute of calling and each SMS message sent.

What can Plivo do?

There seems to be a lot of things you can do with Plivo, but the main ones are:

  • Sending and receiving SMS messages.
  • Receiving inbound calls and making outbound calls.
  • On a call you can:
    • Convert text to speech.
    • Play audio by providing a URL to an MP3 file.
    • Accept digits from callers (e.g.: “press 1 for sales” in phone menus).

Making something with Plivo!

My initial plan was to implement a game of Hangman over SMS, especially considering I’d written a game of Hangman back on my devtrain. However, I quickly discovered that Plivo doesn’t support receiving SMS messages in New Zealand, only sending them. Next I thought I’d make a phone call based Hangman game, but sadly Plivo doesn’t support speech recognition, making it difficult to receive letter inputs.

Instead, I ended up implementing a simple number guessing game. This was a great place to start because the logic for such a game is very simple which allowed me to focus on the phone integration side. The game chooses a random number between 1 and 100 and asks the caller to guess the number by entering it on their dialpad. The game gives feedback on whether the guess is correct or higher/lower than the actual number. If the guess is wrong the caller can keep guessing until they get it correct.

A number line

How do I work with Plivo for calling?

Something I was unsure about initially was how you’d actually handle a call. I’d assumed that after a call was picked up you’d need to deal with the audio stream yourself, but in fact, Plivo handles the entire audio stream for you!

When a call comes in, Plivo hits your API which provides a set of instructions in XML. Depending on the actions you ask for, you may need to define a callback URL for Plivo to hit later with more data (e.g.: a URL for it to hit for instructions after receiving digits from the caller).

It takes a little while to start thinking about phone calls in terms of API requests, but this approach means you can scale your phone system just like you scale your normal web APIs.

Implementation time!

I used Node and Plivo’s Node SDK to build the hotline’s API. Plivo’s SDK handles all the XML instruction formatting for you, making implementation very simple. A good reference for this was Plivo’s Phone IVR guide which introduces all the Plivo functionality I used for the project.

The game is pretty simple: it consists of only two endpoints for Plivo to hit, one to welcome the caller when the call is initially made (and prompt them to enter a guess), and the other as a callback for the caller’s guess.

A diagram showing how the phone, Plivo and hotline API communicate.

Welcome Route

When a call comes in, Plivo hits the / route with a GET request and the route returns the following XML to instruct Plivo:

	<Response>
		<Speak>Welcome to the number guessing hotline.</Speak>
		<GetDigits action="[server URL]/guess?number=[chosen random number]" method="POST" timeout="10" numDigits="2" retries="3">
			<Speak>What is your guess?</Speak>
			<Play>[server URL]/elevator_music.mp3</Play>
		</GetDigits>
		<Speak>Huh? I didn't understand. Could you try again?</Speak>
	</Response>

This simply tells Plivo to speak a welcome message then allow the caller to enter some digits with a defined callback URL. Plivo will ask the caller what their guess is and play some music while the caller enters their digits.

If the GetDigits action fails (e.g.: the caller takes too long) then the wrong input message is spoken.
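Plivo’s SDK will build this XML for you, but there isn’t much to it. Here’s a hand-rolled sketch of the welcome response (the helper name and URLs are my own, not from the real project):

```javascript
// Build the XML returned by the welcome route. Plivo reads these
// instructions top to bottom when the call is answered.
// serverUrl and chosenNumber come from however your app is configured.
function welcomeXml(serverUrl, chosenNumber) {
  return `<Response>
  <Speak>Welcome to the number guessing hotline.</Speak>
  <GetDigits action="${serverUrl}/guess?number=${chosenNumber}" method="POST" timeout="10" numDigits="2" retries="3">
    <Speak>What is your guess?</Speak>
    <Play>${serverUrl}/elevator_music.mp3</Play>
  </GetDigits>
  <Speak>Huh? I didn't understand. Could you try again?</Speak>
</Response>`;
}
```

In an Express app, the / route would just send this string back with a text/xml content type.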

Guess checking route

When the caller has entered some digits, Plivo will hit the guess URL with a POST request. Plivo puts the entered digits in a “Digits” field in the request (if you’re testing this yourself in a tool like Postman, use the x-www-form-urlencoded POST body format).

In addition to this, you may have noticed in the XML above that the game’s chosen random number is a query string parameter in the callback URL. This is because I’m super lazy and I didn’t want to have to persist the chosen random number for a call in a database. Getting Plivo to pass this parameter around for us means we can easily scale up the number of app servers as required for the millions of simultaneous calls required by our booming number guessing hotline startup!

You can check out the code for the guess route, but it yet again returns some XML for Plivo:

    <Response>
        <Speak>You guessed too [low/high]!</Speak>
        <GetDigits action="[server URL]/guess?number=[chosen random number]" method="POST" timeout="10" numDigits="2" retries="3">
            <Speak>What is your guess?</Speak>
            <Play>[server URL]/elevator_music.mp3</Play>
        </GetDigits>
        <Speak>Huh? I didn't understand. Could you try again?</Speak>
    </Response>

Once again this makes use of GetDigits to prompt the caller for a further guess, making use of the same callback URL with the random number.

If the caller guesses the number correctly, the route just returns a <Speak> response. After this is spoken, Plivo has no more instructions so it will hang up on the caller.
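Put together, the guess route’s logic is little more than a comparison. A sketch (again, the names here are hypothetical):

```javascript
// Decide how to respond to a guess. `digits` arrives as a string in
// Plivo's "Digits" POST field; `target` comes back via the query string
// we put on the GetDigits action URL.
function guessXml(serverUrl, target, digits) {
  const guess = parseInt(digits, 10);
  if (guess === target) {
    return '<Response><Speak>You got it! Thanks for calling.</Speak></Response>';
  }
  const hint = guess < target ? 'low' : 'high';
  // Wrong guess: say the hint, then prompt for another guess with the
  // same callback URL (carrying the same target number).
  return `<Response>
  <Speak>You guessed too ${hint}!</Speak>
  <GetDigits action="${serverUrl}/guess?number=${target}" method="POST" timeout="10" numDigits="2" retries="3">
    <Speak>What is your guess?</Speak>
    <Play>${serverUrl}/elevator_music.mp3</Play>
  </GetDigits>
  <Speak>Huh? I didn't understand. Could you try again?</Speak>
</Response>`;
}
```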

You can check out all the code on GitHub at jordancrawfordnz/number-guessing-hotline

Setting the hotline up on Plivo

When a call comes in, Plivo needs to hit a public API endpoint that we provide. This means we need to host our API somewhere on the internet.

If you aren’t comfortable with SSH’ing into a server to set up your app, Heroku is a good option that takes care of all the server management for you. If you want to save a bit of money and don’t mind running the app server yourself then DigitalOcean or Vultr will work great. All these options use hourly (or in the case of Heroku, per-second!) billing so it won’t cost you much to mess around with Plivo. This might also be a good candidate for Serverless given the simplicity of our application code.

Next, buy a number through Plivo then create an application with your server URL as the answer URL and assign your number to your application.

You should now be able to call up your number and enjoy the phone hotline built with less than 100 lines of JavaScript!

About a year ago, I posted two articles about setting up my Raspberry Pi as a home server and how I set up remote access to it from anywhere in the world without using port forwarding. I’ve had a few people stumble across my posts, asking questions and sharing details about their projects with me! I thought I’d take the time to better address an area people seem to be having difficulty with, which is how to set up a tinc network between a cloud server and a computer on a home network.

What are we trying to achieve?

Let’s say you want to access some content or service on a computer at your house from some other computer on the internet. But, as with most home network setups, your computer isn’t reachable from the public internet. This is probably because your home network has a dynamic IP address and some other restrictions (like firewalls and NATs) that make connecting to it a challenge.

We can work around this with the help of another computer that does have a fixed public IP address. But how? Well, the computer at home can still make outgoing connections, so it connects out to the one with the fixed IP address. We can tunnel network traffic through this connection to link the machines together; this is called a Virtual Private Network (VPN).

In my situation, I wanted to get access to services on my Raspberry Pi at home from anywhere on the internet without needing to configure port forwarding. The first step was to establish a reliable VPN between my Pi and a cloud server, so that when I access my cloud server it can pass the traffic through the VPN link to the Pi at home.

tinc is an awesome open-source piece of software that we’ll use to setup this VPN link. My Raspberry Pi runs HypriotOS (but Raspbian is a good option too) and my cloud server is a Vultr cloud server running Ubuntu.

A diagram showing how the home server connects to the cloud server, which establishes a VPN connection between the two.

Installing tinc

The tinc logo

You’ll need tinc installed on both the home server and the cloud server.

Manual installation

Installing tinc will vary by operating system, so Google is your friend. But, if you’re feeling lucky, try the following commands:

sudo -s
apt-get update
apt-get install tinc

If you don’t have any luck, you could try compiling it from source.

Verify your installation works by running the command tincd --help. If this outputs a message explaining all the options of tinc then everything worked fine!

Your tinc config directory will be /etc/tinc by default.

With Docker

I personally use Docker to run tinc in a container for my personal setup as I’ve discussed in a previous post.

On the Linux x86 server I use: jenserat/tinc
On a Raspberry Pi I use: jordancrawford/rpi-tinc.

Make a directory somewhere on your system for your tinc config. For both the above Docker images, you should configure a volume mapping from your tinc config directory to /etc/tinc inside the tinc container.

Planning our configuration

We’re using tinc for quite a simple use case, and as a result, the only things we need to plan out in advance are the names and IP addresses of each of our servers on the tinc network.

I’ll call my cloud server with the public fixed IP address cloud, and my home server behind a home network home.

For the IP addresses, we want a range that won’t clash with my home’s local network. I find 10.0.0.XXX works pretty well, so I’ll configure the IPs as below.

Computer Name  tinc IP
cloud          10.0.0.1
home           10.0.0.2

This means that from the cloud server, accessing 10.0.0.2 will route us to the home server on the other side of the tinc tunnel, and vice versa.

Configuring cloud

A cloud server.

Let’s start by configuring the cloud server.

You can find my full example configuration for the cloud server on GitHub.

Config file

In your tinc config directory, open your favorite text editor and make a tinc.conf file.

Let’s fill this in with the following:

Name = cloud
AddressFamily = ipv4
Interface = tun0

This just says that the name of our server on tinc is “cloud”, that it uses IPv4 and uses a network interface called tun0.

Up and down scripts

Next we need to set up a tinc-up and a tinc-down script. These are what tinc uses to attach itself to its network interface. Make a tinc-up file containing the following:

ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0

Then, make a tinc-down file containing the following:

ifconfig $INTERFACE down

Let’s make these scripts executable by running the command: chmod +x tinc-*

Public and private keys

Next, we need to generate the public / private key pair for cloud.

This will create an entry for cloud in the hosts directory, but we need to make a hosts directory first! Run: mkdir hosts to make this directory.

Generate the keys with tincd -c . -K and hit enter when it asks where to put your private and public keys.

Here’s what this command looks like for me:

root@cloud:~# tincd -c . -K
Generating 4096 bits keys:
......................................................................................++ p
..............................................++ q
Please enter a file to save private RSA key to [/root/tinc-example-cloud/rsa_key.priv]:
Please enter a file to save public RSA key to [/root/tinc-example-cloud/hosts/cloud]:

You’ll now have an rsa_key.priv file with this server’s private key and a cloud file in the hosts directory with its public key.

Now we just need to add some additional information to our hosts/cloud file. Edit this file by adding an Address and Subnet so it looks like the following:

Address = [the hostname or IP address of the cloud server]
Subnet = 10.0.0.1/32

[the generated public key for the cloud server]

Configuring home

A house.

Now, login to your home server and we’ll set it up in a similar way.

You can find my full example configuration for the home server on GitHub.

Config file

Make a tinc.conf file with the following contents:

Name = home
AddressFamily = ipv4
Interface = tun0
ConnectTo = cloud

Similar to cloud, this defines the name, address type and interface for tinc. However, in addition, this also tells it to make a connection to the server called cloud.

Up and down scripts

Like last time, we’ll make the tinc-up and tinc-down scripts.

Your tinc-up script would be:

ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0

And your tinc-down would be the same as on cloud:

ifconfig $INTERFACE down

Once again, make these scripts executable by running chmod +x tinc-*.

Public / private keys

Make the hosts directory with mkdir hosts, generate the keys with tincd -c . -K then add the Subnet to your hosts/home file so it looks like the following:

Subnet = 10.0.0.2/32

[the generated public key for the home server]

Telling the servers about each other

The final step is to tell the servers about each other. The information that each server needs about the other is all contained in the hosts directory. In these host files the Subnet tells us which IP each server should have once connected while the public key is used to secure the connection and verify we’re actually communicating with the server we expect!

This step is pretty easy: we just need to make the hosts directories the same across both systems! Copy your hosts/cloud file from cloud to the hosts directory on home, then copy your hosts/home file from home to the hosts directory on cloud.

Testing everything works

Finally, the moment of truth! Run tincd on both of your servers.

You might benefit from initially running in no-detach mode with tincd -D. Hitting Ctrl + c will increase the log level of the server so you can see all the messages between the two servers. To exit, hit Ctrl + \.

If everything worked correctly, you should be able to access home from cloud at 10.0.0.2. You might want to try initiating an SSH connection or accessing a web service on home, e.g.: ssh [you]@10.0.0.2 or wget http://10.0.0.2

Nice one! You’ve now setup a VPN between a cloud server and your home server behind your restricted home network. Next up you could try setting up an HTTP proxy on your cloud server to access your home services from anywhere.

In November of last year I finished my Software Engineering degree and moved to Wellington to start work at Flux Federation. Flux builds software that runs energy companies like Powershop - an energy retailer in New Zealand, Australia and the United Kingdom. I work as part of the team that builds the Rails app that brings the power to the people.

The Flux logo.

My first few months were spent in training on what Flux calls the Dev Train.

The Dev Train

I had never done anything with Ruby before, so it started with building a command line game of Hangman in Ruby, followed by a Rails version.

A screenshot of my command line Hangman game.
My first Ruby program, command line Hangman!

While building these, the other Dev Trainees and I had code review sessions with Dev Train mentors. These sessions taught us how to best design software systems and write clean, maintainable code.

Next, I made a Rails version of the Ticket to Ride board game. I approached this using test-driven development (TDD). This basically means you write some tests for a new feature, then write the code that makes the tests pass, then refactor the code and tests.


For me, one of the biggest takeaways of the Dev Train was that testing is important! Yes, tests can take a lot of time to write, but especially in a large, complex codebase they become absolutely essential to ensure everything is still ticking along as it should.

For Ticket to Ride I wrote unit tests in RSpec and integration tests with Cucumber.


The RSpec logo.

RSpec allowed me to test the public methods in my controllers, models and services. I found the ability to mock out dependent services incredibly useful. This ensures your specs are only concerned with the code under test, rather than all its dependencies. Mocking means that rather than spending time setting up situations where a dependent service will return a particular result, you can just mock it to return that result and make assertions about how the code under test should respond.

For example, I have a ClaimRouteController which calls a ClaimRoute service. In my specs for ClaimRouteController I mock out ClaimRoute and assert that when ClaimRoute returns errors that these errors will be added to the flash to be displayed to the user. This means I don’t have to do the work of setting up a situation where ClaimRoute will fail when all I care about is that the controller handles errors appropriately.


The Cucumber logo.

Cucumber allows you to automate the process of a user testing out all the features of your application in a browser. A Cucumber feature consists of a series of steps to run the test.

A Cucumber feature is pretty easy to read. For example, the Claim Route feature sets up a game with some train pieces and train cars, then proceeds through the steps to claim a route between two cities. This feature is successful if, after doing this, the list of routes shows this route as being claimed by the player.

Each of these steps has a corresponding step definition. Step definitions are the code behind the scenarios that executes each step. You can use regular expressions in the names of steps to pass parameters to step definitions, keeping steps sounding natural and reusable.

My Ticket to Ride Implementation

It’s fair to say I went overboard on testing my Ticket to Ride game. Despite having ~250 RSpec examples and a further 12 Cucumber scenarios, my game only got as far as allowing users to:

  • Setup a game.
  • Draw additional train cars.
  • Spend train cars and train pieces to claim a route.

Either way, you can check out the code on GitHub and some screenshots of the app below.

Game Board

A screenshot of my Rails Ticket to Ride game's board.

Claiming a route

A screenshot of my Rails Ticket to Ride game's claim a route page.

An Awesome Learning Culture

I’ve now finished the Dev Train and moved onto ‘real work’, but that doesn’t mean the learning stops! With the tech industry moving so fast there is always something new to learn. Flux has an awesome learning culture; each week a few members of the crew give a talk about something that interests them, and 10% of our time goes towards learning new things with Hackdays projects.

I’m really enjoying my time at Flux and I’m looking forward to seeing what’s in store for the next few years!

Edit, 29 Oct 2017: The software dev arm of Powershop split out into a company called Flux Federation a few months ago. It’s still the same awesome place to work so I’ve replaced references to Powershop with Flux.

Last weekend I wanted to see the latest movie trailers. Of course, there are plenty of websites out there for this; however, none of them match the excitement of watching movie trailers before a film at the cinema.

Seats at a cinema.

So that’s what I set out to build. First, I needed a source of movie information. I didn’t want to populate this manually, so I was pleased to find TheMovieDB has an API. This site is great because there’s a whole community of people dedicated to keeping its information up to date.

The TheMovieDB logo.

I linked in with their API and pulled down details of upcoming and now showing movies. TheMovieDB’s API lets me get a list of movies with one request; however, it requires another request to fetch the videos (including trailers) for each movie. TheMovieDB has an API limit of 40 requests every 10 seconds, and if my site got popular it could exceed this. Besides, I don’t want to give out my TMDB key to every visitor!

Instead, I set up my VPS with a nightly job that fetches data from TheMovieDB and saves it to an Amazon S3 bucket. This job delays its requests to avoid the TMDB API limit and keeps only the movie IDs and YouTube trailer IDs. This saves bandwidth and time, as visitors only need to make one request to fetch all the data they need! As the entire site is purely static, everything can be hosted on Amazon S3 for simplicity and scalability.
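The throttling itself is straightforward: batch the requests and pause between batches. A sketch of the idea in Node (the batch size, delay and helper names are my own illustrative assumptions, not the real job’s code):

```javascript
// Split a list of TMDB movie IDs into batches.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Run `fetchOne` for every ID, but never more than `batchSize` requests
// per `pauseMs` window, to stay under the 40-requests-per-10s limit.
async function fetchThrottled(ids, fetchOne, batchSize = 35, pauseMs = 10000) {
  const results = [];
  const batches = toBatches(ids, batchSize);
  for (let i = 0; i < batches.length; i++) {
    results.push(...await Promise.all(batches[i].map(fetchOne)));
    if (i < batches.length - 1) {
      await new Promise((resolve) => setTimeout(resolve, pauseMs));
    }
  }
  return results;
}
```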

Next, I used the YouTube Player API to embed trailers. This let me hook into events like when a video has finished so that I can start the next one playing. I was surprised at the level of control they give embedders, allowing me to prevent annotations, video controls and other distractions for a pure video experience.

The interface while playing a trailer.

A core feature of the site is that it only plays trailers you haven’t seen. To keep things simple, I used local storage in the browser to store the list of movies the user has seen. When the user watches all the available trailers, they can clear out their list of seen movies to start over!
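In code, the seen list is just an array of movie IDs serialised into a single storage key. A minimal sketch (the key and function names are illustrative; `storage` can be window.localStorage or anything with getItem/setItem):

```javascript
// Track which trailers the visitor has already seen, keyed by movie ID.
const SEEN_KEY = 'seenMovies';

function getSeen(storage) {
  return JSON.parse(storage.getItem(SEEN_KEY) || '[]');
}

function markSeen(storage, movieId) {
  const seen = getSeen(storage);
  if (!seen.includes(movieId)) seen.push(movieId);
  storage.setItem(SEEN_KEY, JSON.stringify(seen));
}

// Filter a list of movie objects down to those not yet watched.
function unseen(storage, movies) {
  const seen = new Set(getSeen(storage));
  return movies.filter((movie) => !seen.has(movie.id));
}

// "Start over": clear the seen list entirely.
function startOver(storage) {
  storage.setItem(SEEN_KEY, '[]');
}
```

In the browser you’d call these with window.localStorage, so the list survives between visits without any server-side state.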

The interface when a user has watched all available trailers and has the option to start over.

This simple project took longer than I expected but I’m pleased with the result, and it’s where I’ll go to get my trailers from now on!

Check it out at!