Building a budget home server with the ODROID HC2!

This post is part of a series about my journey towards a low cost and low power home server.

When I first started out on my budget home server journey back in 2016, the most logical choice was the Raspberry Pi 2 micro-computer. This has served me well, but my needs for a server have changed and the Pi just wasn’t keeping up. I want a quicker server but with the small size and low power consumption that made the Pi great!

What’s wrong with a Raspberry Pi as a NAS?

The speed I was getting on my Pi was disappointing.

As I briefly mentioned at the end of my first post, the Raspberry Pi isn’t very quick as a NAS server.

There are several factors at play here, most notably the fact that the Pi 2 only has USB 2.0 ports and 100Mbps Ethernet, which both cap the maximum transfer speeds. In addition, the Ethernet and USB ports share bandwidth to the CPU, so heavy use of these interfaces results in further speed reductions. All this, in combination with a pretty limited CPU, results in poor performance as a NAS!

The ODROID HC2

A view of the ODROID unit from the side.

The ODROID Home Cloud 2 is a single board computer which houses your hard drive and sports some powerful chips on its tiny circuit board. Seriously, this thing is tiny - smaller than a Raspberry Pi! It’s got an octa-core ARM processor, 2GB of RAM, a built-in SATA port and Gigabit Ethernet. The Home Cloud doesn’t even have any form of video output; it’s simply not needed!

Because it houses the hard drive, the Home Cloud makes a very tidy home server setup and only takes up a single power socket. The only downside with the hardware is that the single USB port is limited to USB 2.0, but this is a small complaint given I just use this port for taking backups of the internal hard drive.

A view of the ODROID unit from the top.

The unit is cheap, coming in at about $65 USD for the base unit, power adapter and top case - just bring your own hard drive and SD card.

If you have a 2.5” drive, there’s also the ODROID Home Cloud 1, which has the same specs but installed in a different case. I happen to own a 3.5” hard drive, but I absolutely love the idea of an even smaller NAS. The Home Clouds are also stackable - let me know if you have any ideas for a use case for this!

Setting up the system

Let’s get the basics of a home server configured before getting any applications going.

OpenMediaVault

The OpenMediaVault admin home page.

Just like a Raspberry Pi, the Home Cloud runs its operating system off the SD card. There are a variety of operating system images to choose from. I chose the open-source NAS operating system called OpenMediaVault for mine!

OpenMediaVault has an easy to use web interface for configuring your system, comes with all the file servers you need, and has a good plugin system to make installing apps really easy. OpenMediaVault handles much of the manual configuration I had to do on my Pi NAS which meant I could get a working system going sooner!

I went through the following steps to get OpenMediaVault up and running:

  1. Download the OpenMediaVault image for ODROID units.
  2. Clone this image to your SD card; I like to use Etcher for this.
  3. Boot it up and find your device’s IP address on the network.
  4. Type the IP address of the NAS into your browser and log in with the default credentials from the installation guide (under “First time use”).
  5. Change your web-administrator password under “General Settings” > “Web Administration”.
  6. Run updates by clicking “Check” then “Upgrade” in “Update Management”.
  7. Set up your user under “User”. We need SSH access, so make sure to tick the boxes for the “sudo” and “ssh” groups.
  8. Log in over SSH as your new user and expand your SD card; I personally found the steps outlined in “Resizing the partition online using fdisk” of this guide did the trick - a rough sketch of the idea follows this list.
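
For reference, the general shape of that resize is to recreate the root partition with the same start sector and then grow the filesystem. This is a hedged sketch rather than a substitute for the guide - it assumes the SD card shows up as /dev/mmcblk0 with the root filesystem on partition 2, so check lsblk on your own system first:

lsblk                          # confirm the SD card device and root partition names
sudo fdisk /dev/mmcblk0        # d (delete partition 2), n (recreate with the SAME start sector), w (write)
sudo partprobe /dev/mmcblk0    # ask the kernel to re-read the partition table
sudo resize2fs /dev/mmcblk0p2  # grow the ext4 filesystem to fill the new partition size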

Docker

The Docker logo.

OpenMediaVault has plenty of plugins which we can use, but for anything else we can use Docker. Docker containerises all your apps, which avoids the setup and dependency management mess of installing new software. You can learn more about Docker in my original NAS post.

To get Docker setup:

  1. Under “OMV-Extras”, double click on “Docker” (not “Docker CE”), click “Enable” then “Save”.
  2. Over in “Plugins”, click “Check”. Once this completes, search the plugin list for “docker” and install the “openmediavault-docker-gui” package.
  3. This will give you a handy “Docker” menu to manage your images and containers.
  4. Optionally, if you want to use Docker Compose to manage your containers with files, install Docker Compose using pip - a quick sketch follows this list.
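
The install in step 4 boils down to a couple of commands - the package names below are an assumption and vary between Debian releases, so adjust as needed:

sudo apt-get install -y python-pip     # skip if pip is already installed
sudo pip install docker-compose
docker-compose --version               # confirm the install worked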

File Access

OpenMediaVault makes it easy to access your files over the network!

Setting up shared folders

Firstly, let’s configure the hard drive to spin down automatically under the “Physical Disks” menu; personally, I use the “Intermediate power usage with standby” setting.

Next, configure your partition to mount automatically. Select your drive under “File Systems” and click “Mount”.

The Shared Folders section in OpenMediaVault.

Now let’s configure some shared folders! Pop over to the “Shared Folders” menu and click “Add”. Give your file shares a name, set the device and path, then click “Save”. You need to set your user privileges for each folder by clicking “Privileges” and giving each user either read/write, read-only or no access.

Configuring file servers

A file sharing icon.

Lastly, you just need to configure each of your file servers. These are all configured in a pretty similar way - just add some shares under the “Shares” tab and flip the “Enable” switch under the “Settings” tab.

OpenMediaVault supports the three common file servers used on a NAS, so you get to choose which ones you want to set up!

Apple Filing (AFP)

While no longer the default file sharing server for macOS, AFP is the traditional Mac file server and can only be used by macOS. I personally use a Mac and found AFP to work the fastest for me.

SMB/CIFS

SMB/CIFS is the most widely supported file server with built in support on Windows and macOS in addition to packages like Samba which enable support on Linux systems. I’d highly recommend setting up SMB/CIFS sharing on your NAS even if you don’t intend to use it as your primary file server.

NFS

NFS stands for Network File System. It’s old school and is most common on Linux systems. It’s different from the modern alternatives in that it doesn’t have any authentication by default and expects user IDs to match across the server and client systems. NFS works great, but as it requires a few more considerations I’d only configure it if you really know what you’re doing.
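
For what it’s worth, once an NFS share is enabled in OpenMediaVault, mounting it from a Linux client looks something like the following - the NAS IP and export path here are made up, so check the NFS share settings in OpenMediaVault for your actual export path:

sudo apt-get install nfs-common                              # NFS client tools on Debian/Ubuntu
sudo mkdir -p /mnt/nas-media
sudo mount -t nfs 192.168.1.10:/export/media /mnt/nas-media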

Remote Access

A cloud icon.

Another key part of my NAS server infrastructure is remote access from anywhere in the world. I went a bit far with my remote access requirements: I like to be able to access my NAS from anywhere in the world, no matter what internet connection I’m using at home.

I’ve posted about how I do this in the past; check out how I got access to my home services from anywhere without port forwarding.

Local Access Convenience

192.168.1.?

This was another overkill requirement of mine, but when I access my NAS locally I don’t want to have to remember its IP address or the port numbers of the services it runs.

This is something else I’ve posted about in the past so check out how I get easy access to my home server on a local network.

USB File Backup

One of the downsides of having only one SATA port is that you lose out on the multi-drive data redundancy provided by most commercial NAS setups. While data redundancy is nice to ensure continuity when a drive fails, I’m only using my NAS at home, so I can manage without immediate access to my data if a drive does die.

However, I don’t want to lose all my files completely so I have a second hard drive which is kept offline most of the time as a cold backup. I sync this at regular intervals so in the event of a drive failure I’ll just buy a new drive and restore from this backup.

The USB Backup section in OpenMediaVault.

I use the OpenMediaVault “USB Backup” plugin to automatically sync to my backup drive whenever it’s connected. This is based on rsync but has an easy web-interface to configure the backup mode, shared folder and backup device.
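
Under the hood the plugin is doing little more than an rsync mirror from the shared folder onto the backup drive, roughly like the following - the source and destination paths are made up, as the plugin works the real ones out from the shared folder and device you select:

rsync -av --delete /srv/dev-disk-by-label-data/Media/ /srv/dev-disk-by-label-backup/Media/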

Configuring applications

Now that all the basics are set up, it’s time to configure some applications on the system!

Plex

The Plex icon.

Plex is a server for your personal media collection. It collects brilliant metadata about your content and allows it to be played pretty much anywhere with the web, desktop, TV and mobile apps.

For remote access to Plex I just use Plex’s Relay feature, but to get around the bandwidth limitations you could also run it through my custom remote access solution.

The OpenMediaVault plugin for Plex is very simple; all I need to do is select a physical disk for it to store the Plex data files on.

Metadata storage

The OpenMediaVault plugin for Plex lets you configure the drive to store the metadata on but it won’t let you use the boot volume.

I figured that it made sense to store all the Plex metadata and such on the SD card, as it doesn’t need to spin up and SD cards are better suited to random reads like database and metadata accesses.

I went to a lot of effort trying to partition my SD card in order to store my metadata there, only to find that the benefit was negligible (and in fact, with the SD card I was using, significantly worse!). So, a word of wisdom: don’t bother, and just store the Plex database on the hard drive!

Deluge

The Deluge icon.

Deluge is a torrent client with a web-interface. This allows you to download torrents directly on your NAS. I used the OpenMediaVault plugin for this and it works great!

Time Machine Backup

The Time Machine icon.

Time Machine is a feature built into macOS which backs up your computer every hour with file snapshots. Generally you’ll need an Apple Time Capsule to do this over the network, but OpenMediaVault’s Apple Filing server lets you enable Time Machine support for a network share! All you need to do is configure a share and tick “Time Machine support”.

This is completely optional, but I also configured a quota for this volume. This is a pretty awesome feature of OpenMediaVault where I can configure the Time Machine volume with an artificial capacity in order to prevent backups from filling up my entire drive. Set the quota option in the share (I used about 500GB for mine) and Time Machine will start clearing out old backups when it hits this point.

Cloud Storage Backup

The Google Drive icon.

I store a lot of my commonly accessed files in my Google Drive account. I have so many files there that I can’t sync them all with my computer so I use the Google Drive File Stream app to access everything else via the internet as needed. However, if my internet goes out at home or my Drive account is somehow compromised then I still want to have a copy of these files.

I use the rclone tool to do a one-way backup of my cloud storage to my NAS. I run this using Docker and the tloxipeuhca/rpi-rclone image.

I use OpenMediaVault’s built-in “Scheduled Jobs” mechanism to run this job once a week. To ensure I’m alerted if this backup ever stops working, I’ve configured healthchecks.io to notify me if my cloud backup doesn’t run once a week.

The scheduled job uses a command like the following to run the backup and log the successful run:

docker start --attach [docker container name] && curl [healthcheck link]
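
Strung together, the whole weekly job is essentially a one-way rclone sync followed by a ping to healthchecks.io. Run outside Docker it would look something like the following - the remote name, destination path and ping URL are placeholders for your own values:

rclone sync gdrive: /srv/dev-disk-by-label-data/GoogleDriveBackup -v \
  && curl -fsS https://hc-ping.com/your-check-uuid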

How does it compare to the Pi?

Now for the moment of truth: how much better is my latest generation of NAS in comparison to the Raspberry Pi? Well, I did some not-so-scientific tests: another run of Blackmagic Disk Speed Test, followed by a real-world transfer of a 3.85GB file.

The Disk Speed test and file copy results compared between the Raspberry Pi and ODROID servers.

The results show that the ODROID unit is significantly faster than the Pi! I can’t believe it took over 40 minutes to copy a 3.85GB file to the Pi and back - the ODROID managed this task in under 2 minutes!

All the best for your projects! I’d love to hear how your projects go in the comments 😀

Monitoring an off-grid system with Kauri

In 2016 I finished my Software Engineering degree at the University of Waikato with my fourth year honours project. The objective of my project was to create a system which would allow the owner of an off-grid house to monitor their energy system and access this information from anywhere.

The Kauri logo

What resulted is Kauri Energy Monitor, a cloud-based system for monitoring a renewable energy system, and I’ve recently gotten it to the point where it’s ready to release to the world!

In this post I’ll cover what Kauri can do, how you can start using it and I’ll discuss a few things the process has taught me.

Who would need this?

Some houses are fitted with renewable energy systems - this is a very broad term for various configurations of equipment which allow the occupants to generate their own electricity. Depending on the exact configuration, these may also have a form of energy storage or backup generation.

An off-grid system is a good example of a renewable energy system. These might have solar or wind generation, a bank of batteries for energy storage and a generator as a backup energy supply. On the complete opposite end of the spectrum, grid connected systems usually feed their excess generation back to the grid and consume energy from the grid when required.

A diagram showing a typical off-grid system.

In all of these situations it can be difficult to figure out what your energy system is doing. Is it currently charging? How long will the battery last? How efficient is my generation?

Kauri answers these questions and allows users to do this from any internet connected device.

What can Kauri do?

Users of Kauri set up energy sensors at various points in their energy system. These sensors are hooked up to a computer running a piece of software called Kauri Bridge, which sends readings to the Kauri server.

By analysing the energy flow data, Kauri shows an overview of all the energy flowing through the system. This tells the user the amount of energy being generated, consumed by the house or stored in the battery. Kauri also feeds this information into the B42SOC algorithm to calculate the state of charge of the system’s batteries (i.e.: battery level).

The summary page.
The Summary page in Kauri shows current energy flows and battery state.

Kauri determines the future state of the system using battery state information and energy flow patterns developed over time. This lets Kauri answer questions like, when will the battery be fully charged? When will the battery run out? Or what battery level will the system be at in 5 hours?

The future state page.
The Future State page in Kauri shows the state of the system into the future.

How can I use Kauri?

Getting a bridge and some sensors sorted

You’ll need to have some supported sensor devices set up in your energy system (the “What do I need?” page will help you figure out which sensors you’ll need). Once you have these hooked up to a computer which can run Kauri Bridge, you can set up your Kauri server.

The Smart Circuit 20 device exterior.
A Smart Circuit SC20 device - an AC sensor supported by Kauri Bridge.

Running your own Kauri server

Kauri is open source so you can run it on your own server for free! The getting started guide runs you through the process of setting up Kauri on your own server.

See the getting started guide.

Want a hosted option?

An image of a cloud.

Currently if you want to use Kauri you’ll need to be comfortable with hosting the Kauri server yourself. If you aren’t, then I’d love to offer this as a paid monthly service if there is enough interest.

Express your interest in a hosted option here!

Contributors wanted!

I’ve really enjoyed working on Kauri but I really want to get onto some new projects! If you want to improve the project, please feel free to submit a pull request! I’m happy to provide advice for further development.

Contribute on GitHub!

Want to learn more?

If you’d like to know more about how it all works, feel free to read my honours project report:

A PDF icon.

Read "Cloud Based Monitoring of a Renewable Energy System"

Or, watch a video of a slightly younger version of myself presenting my honours project:

What has this project taught me?

This whole project has taught me so much technically and it’s the biggest personal project I’ve ever embarked on, but there are a few specific things that I’ve learned that I want to discuss.

Give your project a name

Kauri didn’t really have a proper name until recently - it was usually just known as ‘my honours project’ or ‘Offgrid Monitoring’. Giving a project a name gets you thinking about the purpose and scope of the project which makes it easier to reason about its features.

For example, by calling it ‘Kauri Energy Monitor’ I decided that I would support as many configurations of renewable energy systems as possible so I shouldn’t prioritise features that only benefit users of off-grid systems.

Ask the question, will I really need that?

While getting Kauri ready for release I listed off loads of features that I thought would be useful. These things would be nice to implement, but they weren’t driven by any actual user requirement, and leaving them out wouldn’t be the biggest barrier to adoption.

The lesson here is to ensure any change you make provides enough value to be worth the cost of implementation. This isn’t easy for projects which you’re passionate about but it means you aren’t wasting time implementing something which no one needs.

Think about project handover

While I was first developing Kauri it had a single production instance that was kept up to date as the system developed. If I needed to play with some real data I just downloaded a database dump and applied it to my local database. I didn’t need documentation at that point because I had all the context in my head and I was the only developer. Some actions could only be completed via the API as it wasn’t worth the time building an interface to add a building if I only needed to add a building once.

However, after spending a few months without working on Kauri I experienced the cold introduction that any new developer would get. How do I get it running on my system? What if I want to get some data running locally without a real renewable energy system to test with?

I realised that I needed to:

  • Improve the configuration interface so that users didn’t need to use the API to configure Kauri.

  • Write some basic documentation - I covered how to set up an environment, some of the background to the project and added a getting started guide.

  • Provide a set of mock data to allow users to experiment without needing a renewable energy system.

  • Build tooling to populate Kauri with example data.

Always ask yourself, what would be the biggest pain points for continued development if I had to hand it over to someone else tomorrow?

Ten percent of my work time is dedicated to professional development through two ‘Hackdays’ a month. For my most recent Hackdays I wanted to combine old-school phone technology with ‘modern’ web technologies. I came across a service called Plivo which lets you work with the phone network through a web API.

What is Plivo?

The Plivo logo

Plivo is a web service that gives us a web API to interact with the phone network. You rent a number with Plivo and pay a charge for each minute of calling or SMS message sent.

What can Plivo do?

There seems to be a lot of things you can do with Plivo, but the main ones are:

  • Sending and receiving SMS messages.
  • Receiving inbound calls and making outbound calls.
  • On a call you can:
    • Convert text to speech.
    • Play audio by providing a URL to an MP3 file.
    • Accept digits from callers (e.g.: “press 1 for sales” in phone menus).

Making something with Plivo!

My initial plan was to implement a game of Hangman over SMS, especially considering I’d written a game of Hangman back on my Dev Train. However, I quickly discovered that Plivo didn’t support receiving SMS messages in New Zealand, only sending them. Next I thought I’d just make a phone call based Hangman game, but sadly Plivo doesn’t support speech recognition, making it difficult to receive character inputs.

Instead, I ended up implementing a simple number guessing game. This was a great place to start because the logic for such a game is very simple which allowed me to focus on the phone integration side. The game chooses a random number between 1 and 100 and asks the caller to guess the number by entering it on their dialpad. The game gives feedback on whether the guess is correct or higher/lower than the actual number. If the guess is wrong the caller can keep guessing until they get it correct.

A number line

How do I work with Plivo for calling?

Something I was unsure about initially was how you’d actually handle a call. I’d assumed that after a call was picked up you’d need to deal with the audio stream yourself, but in fact, Plivo handles the entire audio stream for you!

When a call comes in, Plivo hits your API which provides a set of instructions in XML. Depending on the actions you ask for, you may need to define a callback URL for Plivo to hit later with more data (e.g.: a URL for it to hit for instructions after receiving digits from the caller).

It takes a little while to start thinking about phone calls in terms of API requests, but this approach means you can scale your phone system just like you scale your normal web APIs.

Implementation time!

I used Node and Plivo’s Node SDK to build the hotline’s API. Plivo’s SDK handles all the XML instruction formatting for you, making implementation very simple. A good reference for this was Plivo’s Phone IVR guide which introduces all the Plivo functionality I used for the project.

The game is pretty simple, consisting of only two endpoints for Plivo to hit: one for when the call is initially made, to welcome the caller and prompt them to enter a guess, and the other as a callback for the caller’s guess.

A diagram showing how the phone, Plivo and hotline API communicate.

Welcome Route

When a call comes in, Plivo hits the / route with a GET request and the route returns the following XML to instruct Plivo:

<Response>
	<Speak>Welcome to the number guessing hotline.</Speak>
	<GetDigits action="[server URL]/guess?number=[chosen random number]" method="POST" timeout="10" numDigits="2" retries="3">
		<Speak>What is your guess?</Speak>
		<Play>[server URL]/elevator_music.mp3</Play>
	</GetDigits>
	<Speak>Huh? I didn't understand. Could you try again?</Speak>
</Response>

This simply tells Plivo to speak a welcome message then allow the caller to enter some digits with a defined callback URL. Plivo will ask the caller what their guess is and play some music while the caller enters their digits.

If the GetDigits action fails (e.g.: the caller takes too long) then the wrong input message is spoken.

Guess checking route

When the caller has entered some digits, Plivo will hit the guess URL with a POST request. Plivo puts the entered digits in a “Digits” field in the request (if you’re testing this out for yourself in a tool like Postman, use the x-www-form-urlencoded POST body format).
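
If you want to poke this route without involving Plivo at all, you can fake the request with curl - the port, chosen number and guess below are arbitrary and assume the app is running locally:

curl -X POST "http://localhost:3000/guess?number=42" --data-urlencode "Digits=50"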

In addition to this, you may have noticed in the XML above that the game’s chosen random number is a query string parameter in the callback URL. This is because I’m super lazy and didn’t want to have to persist the chosen random number for a call in a database. Getting Plivo to pass this parameter around for us means we can easily scale up the number of app servers as required for the millions of simultaneous calls required by our booming number guessing hotline startup!

You can check out the code for the guess route, but it yet again returns some XML for Plivo:

<Response>
    <Speak>You guessed too [low/high]!</Speak>
    <GetDigits action="[server URL]/guess?number=[chosen random number]" method="POST" timeout="10" numDigits="2" retries="3">
        <Speak>What is your guess?</Speak>
        <Play>[server URL]/elevator_music.mp3</Play>
    </GetDigits>
    <Speak>Huh? I didn't understand. Could you try again?</Speak>
</Response>

Once again this makes use of GetDigits to prompt the caller for a further guess, making use of the same callback URL with the random number.

If the caller gets the number correct, it simply returns a <Speak> response. After this is spoken, Plivo has no more instructions, so it will just hang up on the caller.

You can check out all the code on GitHub at jordancrawfordnz/number-guessing-hotline

Setting the hotline up on Plivo

When a call comes in, Plivo needs to hit a public API endpoint that we provide. This means we need to host our API somewhere on the internet.

If you aren’t comfortable with SSH’ing into a server to set up your app, Heroku is a good option that takes care of all the server management stuff for you. If you want to save a bit of money and don’t mind running the app server yourself, then DigitalOcean or Vultr will work great. All these options use hourly (or, in the case of Heroku, per-second!) billing, so it won’t cost you much to mess around with Plivo. This might also be a good candidate for Serverless given the simplicity of our application code.

Next, buy a number through Plivo then create an application with your server URL as the answer URL and assign your number to your application.

You should now be able to call up your number and enjoy the phone hotline built with less than 100 lines of JavaScript!

About a year ago, I posted two articles about setting up my Raspberry Pi as a home server and how I set up remote access to it from anywhere in the world without using port forwarding. I’ve had a few people stumble across my posts, asking questions and sharing details about their projects with me! I thought I’d take the time to better address an area people seem to be having difficulty with: how to set up a tinc network between a cloud server and a computer on a home network.

What are we trying to achieve?

Let’s say you want to access some content or service on a computer at your house from some other computer on the internet. But, as with most home network setups, your computer isn’t reachable from the public internet. This is probably because your home network has a dynamic IP address and some other restrictions (like firewalls and NATs) that make connecting to it a challenge.

We can achieve this with the help of another computer that has a fixed public IP address. But how? Well, the computer at home can still make outgoing connections to the one with a fixed IP address. We can tunnel network traffic through this connection to link the machines together; this is called a Virtual Private Network (VPN).

In my situation, I wanted to get access to services on my Raspberry Pi at home from anywhere on the internet without needing to configure port forwarding. The first step in this was to establish a reliable VPN between my Pi and cloud server so that when I access an address like “plex.jordancrawford.kiwi” my cloud server can pass the traffic through the VPN link to the Pi at home.

tinc is an awesome open-source piece of software that we’ll use to setup this VPN link. My Raspberry Pi runs HypriotOS (but Raspbian is a good option too) and my cloud server is a Vultr cloud server running Ubuntu.

A diagram showing how the home server connects to the cloud server, which establishes a VPN connection between the two.

Installing tinc

The tinc logo

You’ll need tinc installed on both the home server and the cloud server.

Manual installation

Installing tinc will vary by operating system, so Google is your friend. But, if you’re feeling lucky, try the following commands:

sudo -s
apt-get update
apt-get install tinc

If you don’t have any luck, you could try compiling it from source.

Verify your installation works by running the command tincd --help. If this outputs a message explaining all the options of tinc then everything worked fine!

Your tinc config directory will be /etc/tinc by default.

With Docker

I personally use Docker to run tinc in a container for my personal setup as I’ve discussed in a previous post.

On the Linux x86 server I use: jenserat/tinc
On a Raspberry Pi I use: jordancrawford/rpi-tinc.

Make a directory somewhere on your system for your tinc config. For both the above Docker images, you should configure a volume mapping from your tinc config directory to /etc/tinc inside the tinc container.
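
As a rough idea of what that looks like (the exact arguments differ between the two images, so treat this as a sketch and check each image’s README), tinc needs the host’s TUN device and network admin rights inside the container:

docker run -d --name tinc \
  --net=host \
  --device=/dev/net/tun --cap-add=NET_ADMIN \
  -v /path/to/your/tinc-config:/etc/tinc \
  jenserat/tinc    # plus whatever start arguments the image's README asks for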

Planning our configuration

We’re using tinc for quite a simple use case, and as a result, the only things we need to plan out in advance are the names and IP addresses of each of our servers on the tinc network.

I’ll call my cloud server with the public fixed IP address cloud, and my home server behind a home network home.

For the IP addresses, we want a range that won’t clash with the home’s local network. I find 10.0.0.XXX works pretty well, so I’ll configure the IPs as below.

Computer Name tinc IP
cloud 10.0.0.1
home 10.0.0.2

This means that from the cloud server, accessing 10.0.0.2 will route us to the home server on the other side of the tinc tunnel, and vice versa.

Configuring cloud

A cloud server.

Let’s start by configuring the cloud server.

You can find my full example configuration for the cloud server on GitHub.

Config file

In your tinc config directory, open your favorite text editor and make a tinc.conf file.

Let’s fill this in with the following:

Name = cloud
AddressFamily = ipv4
Interface = tun0

This just says that the name of our server on tinc is “cloud”, that it uses IPv4 and uses a network interface called tun0.

Up and down scripts

Next we need to set up a tinc-up and tinc-down script. These are what tinc uses to attach itself to its network interface. Make a tinc-up file containing the following:

ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0

Then, make a tinc-down file containing the following:

ifconfig $INTERFACE down

Let’s make these scripts executable by running the command: chmod +x tinc-*

Public and private keys

Next, we need to generate the public / private key pair for cloud.

This will create an entry for cloud in the hosts directory, but we need to make a hosts directory first! Run: mkdir hosts to make this directory.

Generate the keys with tincd -c . -K and hit enter when it asks where to put your private and public keys.

Here’s what this command looks like for me:

root@cloud:~# tincd -c . -K
Generating 4096 bits keys:
......................................................................................++ p
..............................................++ q
Done.
Please enter a file to save private RSA key to [/root/tinc-example-cloud/rsa_key.priv]:
Please enter a file to save public RSA key to [/root/tinc-example-cloud/hosts/cloud]:

You’ll now have an rsa_key.priv file with this server’s private key, and a cloud file in the hosts directory with its public key.

Now we just need to add some additional information to our hosts/cloud file. Edit this file by adding an Address and Subnet so it looks like the following:

Address = [the hostname or IP address of the cloud server. e.g.: server.mydomain.com]
Subnet = 10.0.0.1/32

-----BEGIN RSA PUBLIC KEY-----
[the generated public key for the cloud server]
-----END RSA PUBLIC KEY-----

Configuring home

A house.

Now, login to your home server and we’ll set it up in a similar way.

You can find my full example configuration for the home server on GitHub.

Config file

Make a tinc.conf file with the following contents:

Name = home
AddressFamily = ipv4
Interface = tun0
ConnectTo = cloud

Similar to cloud, this defines the name, address type and interface for tinc. However, in addition, this also tells it to make a connection to the server called cloud.

Up and down scripts

Like last time, we’ll make the tinc-up and tinc-down scripts.

Your tinc-up script would be:

ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0

And your tinc-down would be the same as before:

ifconfig $INTERFACE down

Once again, make these scripts executable by running chmod +x tinc-*.

Public / private keys

Make the hosts directory with mkdir hosts, generate the keys with tincd -c . -K then add the Subnet to your hosts/home file so it looks like the following:

Subnet = 10.0.0.2/32

-----BEGIN RSA PUBLIC KEY-----
[the generated public key for the home server]
-----END RSA PUBLIC KEY-----

Telling the servers about each other

The final step is to tell the servers about each other. The information that each server needs about the other is all contained in the hosts directory. In these host files the Subnet tells us which IP each server should have once connected while the public key is used to secure the connection and verify we’re actually communicating with the server we expect!

This step is pretty easy to do; we just need to make the hosts directories the same across both systems! Just copy your hosts/cloud file from cloud to the hosts directory on home, then copy your hosts/home file from home to the hosts directory on cloud.
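
Since home can already reach cloud over the internet, it’s easiest to run both copies from home with scp - the username, address and config paths below are placeholders:

scp /etc/tinc/hosts/home [user]@[cloud address]:/etc/tinc/hosts/
scp [user]@[cloud address]:/etc/tinc/hosts/cloud /etc/tinc/hosts/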

Testing everything works

Finally, the moment of truth! Run tincd on both of your servers.

You might benefit from initially running in no-detach mode with tincd -D. Hitting Ctrl + C will increase the log level of the server, so you can see all the messages between the two servers. To exit, hit Ctrl + \.

If everything worked correctly you should be able to access home from 10.0.0.2 on cloud. You might want to try initiating an SSH connection or accessing a web service from home, e.g.: ssh [you]@10.0.0.2 or wget 10.0.0.2.

Nice one! You’ve now setup a VPN between a cloud server and your home server behind your restricted home network. Next up you could try setting up an HTTP proxy on your cloud server to access your home services from anywhere.

In November of last year I finished my Software Engineering degree and moved to Wellington to start work at Flux Federation. Flux builds software that runs energy companies like Powershop - an energy retailer in New Zealand, Australia and the United Kingdom. I work as part of the team that builds the Rails app that brings the power to the people.

The Flux logo.

My first few months were spent in training on what Flux calls the Dev Train.

The Dev Train

I had never done anything with Ruby before, so it started with building a command line game of Hangman in Ruby, followed by a Rails version.

A screenshot of my command line Hangman game.
My first Ruby program, command line Hangman!

While building these, the other Dev Trainees and I had code review sessions with Dev Train mentors. These sessions taught us how to best design software systems and write clean, maintainable code.

Next, I made a Rails version of the Ticket to Ride boardgame. I approached this using test driven development (TDD). This basically means you write some tests for new features, then write the code that makes the tests pass, then refactor the code and tests.

Testing!

For me, one of the biggest takeaways of the Dev Train was that testing is important! Yes, tests can take a lot of time to write, but especially in a large, complex codebase they become absolutely essential to ensure everything is still ticking along as it should.

For Ticket to Ride I wrote unit tests in RSpec and integration tests with Cucumber.

RSpec

The RSpec logo.

RSpec allowed me to test the public methods in my controllers, models and services. I found the ability to mock out dependent services incredibly useful. This ensures your specs are only concerned with the code under test, rather than all its dependencies. Mocking means that rather than spending time setting up situations where a dependent service will return a particular result, you can just mock it to return that result and make assertions about how the code under test should respond.

For example, I have a ClaimRouteController which calls a ClaimRoute service. In my specs for ClaimRouteController I mock out ClaimRoute and assert that when ClaimRoute returns errors that these errors will be added to the flash to be displayed to the user. This means I don’t have to do the work of setting up a situation where ClaimRoute will fail when all I care about is that the controller handles errors appropriately.

Cucumber

The Cucumber logo.

Cucumber allows you to automate the process of a user testing out all the features of your application in a browser. A Cucumber feature consists of a series of steps to run the test.

A Cucumber feature is pretty easy to read. For example, the Claim Route feature sets up a game with some train pieces and train cars, then proceeds through the steps to claim a route between two cities. This feature is successful if, after doing this, the list of routes shows this route as being claimed by the player.

Each of these steps has a corresponding step definition. Step definitions are the code behind the scenarios that executes each step. You can use regular expressions in the names of steps to pass parameters to step definitions, keeping steps sounding natural and reusable.

My Ticket to Ride Implementation

It’s fair to say I went overboard on testing my Ticket to Ride game. Despite having ~250 RSpec examples and a further 12 Cucumber scenarios, my game only got as far as allowing users to:

  • Setup a game.
  • Draw additional train cars.
  • Spend train cars and train pieces to claim a route.

Either way, you can check out the code on GitHub and some screenshots of the app below.

Game Board

A screenshot of my Rails Ticket to Ride game's board.

Claiming a route

A screenshot of my Rails Ticket to Ride game's claim a route page.

An Awesome Learning Culture

I’ve now finished the Dev Train and moved onto ‘real work’, but that doesn’t mean the learning stops! With the tech industry moving so fast there is always something new to learn. Flux has an awesome learning culture; each week a few members of the crew give a talk about something that interests them, and 10% of our time goes towards learning new things with Hackdays projects.

I’m really enjoying my time at Flux and I’m looking forward to seeing what’s in store for the next few years!

Edit, 29 Oct 2017: The software dev arm of Powershop split out into a company called Flux Federation a few months ago. It’s still the same awesome place to work so I’ve replaced references to Powershop with Flux.