Accessing home services from anywhere, without port forwarding!

My Raspberry Pi is set up as a home server, giving me access to and control of my content through several services:

  • A Deluge server for torrent downloads.
  • A Plex server to manage and stream my media collection.
  • A Pydio server for remote access and management of files.

This is great, but I want to access my content when I’m away from home. Previous experience with remote access solutions inspired some requirements:

  1. I should be able to access my services from any computer in the world.
  2. I should be able to punch in an easy-to-remember web address like deluge.crawford.kiwi. This means the standard HTTP/HTTPS ports should be used!
  3. All transmission of my content should be encrypted.
  4. Moving house or ISP shouldn’t break my remote access.

Problems with home internet connections

A house.

The hardest requirement to cater to is that moving house or ISP shouldn’t break my remote access. Why is this such an issue?

May not have a static IP

Very rarely does a home connection come with a static IP. Getting one typically costs extra, and the DNS record would need updating whenever it changes. The solution is a dynamic DNS service, which solves both these issues, but we still end up with the problems below.

May not have control of the router

Port forwarding needs to be set up on your router so incoming connections are forwarded to the home server. However, you may not have full control over the router to set up these rules. Many routers also take the default HTTP/HTTPS ports for their own services, leaving you with non-standard port numbers for everything else.

May not be able to get incoming connections

Finally, once we’ve sorted out everything above, you may not be able to get incoming traffic to reach your house! As a result of the IPv4 address shortage and ISP firewalls, incoming connections don’t always work. Never fear, we can still tunnel traffic through the internet with the help of another computer with a more accessible connection.

Tunnelling out

A diagram with a Raspberry Pi in the home network connected via a tunnel to the proxy server, allowing access over the internet with a client.

The concept of using a tunnel is pretty simple. We may not be able to get incoming connections to the home server, but the home server can set up an outgoing tunnel connection to some other machine on the internet. This other machine can be accessed from anywhere and forwards connections through the tunnel to reach the home server.

The Other Machine

To meet my crazy requirements above, some other machine needs to be involved. There are services dedicated to providing remote access to your networks, like Hamachi. These work well but require client software to join the virtual network, failing my first requirement of working on any computer!

Other than that, there are virtual private network (VPN) services which provide port forwarding; however, it's unlikely you'd be able to use the HTTP and HTTPS ports.

Last resort: DIY! A virtual private server (VPS) is a cheap way to get a small cloud server with a decent connection and its very own IPv4 address. For this I grabbed a VPS from Vultr, whose cheapest server has more than enough grunt to provide remote access.

Reverse SSH Tunnel

A common way to get remote access through a firewall is with a reverse SSH tunnel. This is easy to set up and works well, but I discovered that HTTP-based services run extremely slowly through the tunnel. The most likely reason is that both SSH and HTTP use the TCP protocol to transmit data over the network. TCP ensures a reliable connection with built-in error checking and transmission control, but this comes at a cost in speed. Running HTTP through an SSH tunnel stacks one TCP connection inside another, so these checks are performed twice, resulting in much slower speeds.
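For reference, a reverse tunnel like this can be opened with a single SSH command. This is a minimal sketch; the user and hostname are placeholders for your own VPS.

```shell
# Run on the home server: expose Deluge's web UI (port 8112) on the VPS.
# -N: don't run a remote command; -R: forward VPS port 8112 back to the Pi.
ssh -N -R 8112:localhost:8112 user@vps.example.com
```

With this running, a browser on the VPS can reach the Deluge web UI at localhost:8112, though as noted above, HTTP over the TCP-based tunnel is slow.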

The answer is to switch to something UDP-based. UDP is much faster because packets are sent straight over the network without error checking or flow control. With a UDP-based tunnelling solution, only the HTTP layer is performing these extra tasks.

tinc VPN

The tinc VPN software was the answer. tinc can be used to create virtual networks between computers. It uses UDP so it runs quickly, all traffic is encrypted, and it continually re-checks the status of its VPN connection, so it works well even on unreliable connections.

I set up tinc using DigitalOcean's tutorial, which involves setting up the network's configuration, generating key-pairs, and copying the key-pairs between the machines. Both my VPS and Raspberry Pi run Docker, so I created rpi-tinc for the Raspberry Pi and used jenserat's tinc image for the VPS.
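For a sense of what that configuration looks like, here is a minimal sketch of the tinc files on the Raspberry Pi side. The network name (homenet), host names, and addresses are placeholders; the real values come out of the tutorial's steps.

```
# /etc/tinc/homenet/tinc.conf (on the Raspberry Pi)
Name = homeserver
AddressFamily = ipv4
Interface = tun0
ConnectTo = vps

# /etc/tinc/homenet/hosts/vps (the VPS's host file, present on both machines)
Address = 203.0.113.5
Subnet = 10.0.0.1/32
```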

Pretty URLs

Proxying Connections

With tinc working, all the services on the home server can be accessed through a local IP on the VPS, like 10.0.0.2:8112 for Deluge. Time to turn that into something nice like deluge.crawford.kiwi!

The subdomains point to the VPS's IP address. The VPS runs an NGINX server using the official NGINX Docker image. Following the NGINX documentation, NGINX is set up as a proxy server to the home server's IP address, meaning external traffic is forwarded through the tinc link to the home server.
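As a rough illustration, the proxy configuration for one service looks something like the sketch below; the tinc IP and port match the Deluge example above, and individual services may need extra proxy headers.

```nginx
# On the VPS: forward deluge.crawford.kiwi to Deluge on the home server,
# reached through the tinc virtual network.
server {
    listen 80;
    server_name deluge.crawford.kiwi;

    location / {
        proxy_pass http://10.0.0.2:8112;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```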

HTTPS

The LetsEncrypt logo.

HTTPS is used to ensure all data transport is encrypted. This requires valid SSL certificates, which can be obtained for free from Let's Encrypt through their automated verification process. To set up automatic renewal I used bringnow's docker-letsencrypt-manager and shared its volumes with the NGINX container. A very useful tool when working with HTTPS is SSL Labs' SSL Test.
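The HTTPS side of the NGINX config then references the shared certificate volumes. A minimal sketch, assuming the certificates live in the standard Let's Encrypt directory layout:

```nginx
server {
    listen 443 ssl;
    server_name deluge.crawford.kiwi;

    # Certificates from the volume shared with the Let's Encrypt container.
    ssl_certificate     /etc/letsencrypt/live/deluge.crawford.kiwi/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/deluge.crawford.kiwi/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2:8112;
    }
}
```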

Using Plex


Plex is a powerful media server, allowing access to your media from anywhere. Plex offers Relay, a feature that tunnels into your home server in a similar way to my setup. This works well but has bandwidth limitations. To set up MyPlex through the tinc tunnel I used this Gist to help with the NGINX configuration, and set a custom URL in my Plex settings, like https://plex.crawford.kiwi:443 (the port number must be explicitly defined for it to work). This works well, but I did have difficulties when Chromecasting from Android while on an external network; hopefully Plex will release a fix for this.

Take Away

While my original requirements were pretty over the top, I'm happy I have a solution which satisfies them. I have been using this setup reliably for about three months, and having an easy-to-remember web address for all my home services is great. This process has taught me a lot about Docker, NGINX, and networking, much of which has already proven useful in other contexts.

My home server powered by Pi and Docker

I was in need of a server that gives me remote access to my files, can run Plex, can download torrents, and is quiet and efficient enough to run 24/7.

The Raspberry Pi 2 board.

The answer? A Raspberry Pi, of course! I bought a Raspberry Pi 2 for about $60 NZD (soon to be replaced by a Pine 64!).

What is Docker?

The Docker logo.

I used Docker to manage the services on the Raspberry Pi. Docker containerises services, allowing them to be shared and scaled easily. For me, the main benefit was that setting up a service is as easy as downloading a Docker image, rather than manually installing packages and making configuration changes.

Core Concepts

An image is the starting point for setting up a service. Images typically come from Docker's image repository, Docker Hub, but failing that you can always build an image of your own. An image includes the operating system and all other software and configuration needed to run the service.

A container is an instance of an image. Each runs independently, with its own file system and environment variables. You can run multiple containers from the same image without them clashing.

While this sounds like a performance nightmare, Docker doesn't actually run containers as virtual machines; instead, they all share the host's kernel. This means that even on a little device like the Raspberry Pi you can run plenty of containers at once, even hundreds.
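For example, two containers can be started from the same image side by side. This is just a sketch; the container names and host ports are made up, and the image is the Deluge one used later in this post:

```shell
# Each container gets its own filesystem and environment; only the
# host-side ports need to differ.
docker run -d --name deluge-one -p 8112:8112 jordancrawford/rpi-deluge
docker run -d --name deluge-two -p 8113:8112 jordancrawford/rpi-deluge
```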

Running Docker on the Raspberry Pi

A Docker + Pi combination. Image by Hypriot.

The Docker community for systems with ARM processors is growing. A big name in this space is Hypriot with HypriotOS. This is a lightweight operating system for your Raspberry Pi with Docker built in.

Only Docker images built specifically for ARM processors will work on the Pi. Usually adding rpi as a search term on Docker Hub helps find compatible images.

Setting up an external hard drive

I chose the EXT4 file system as it doesn't require additional software to be installed on the Pi, and compatibility with other systems isn't a major requirement for me. I set up the system to auto-mount the drive using its UUID, and set up the drive to automatically go to sleep using hd-idle.
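The auto-mount is just an /etc/fstab entry keyed on the drive's UUID. A sketch, with a placeholder UUID and mount point (find the real UUID with sudo blkid):

```
# /etc/fstab
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storage  ext4  defaults,nofail  0  2
```

The nofail option is worth having so the Pi still boots if the drive is unplugged.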

Managing service configuration

The Docker Compose logo.

I used Docker Compose to manage the services on the Pi. Docker Compose works as a layer on top of the Docker command line, meaning you can start and manage the entire stack with a single command.

The Docker Compose file describing the services stack can be stored in Git, so you always have access to your working configuration. Re-installing is easy: just clone the repository, run docker-compose up -d, and wait while the images download.
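A docker-compose.yml for a stack like this looks roughly as follows. The images are the ones used later in this post, but the ports and volume paths here are illustrative:

```yaml
version: "2"
services:
  deluge:
    image: jordancrawford/rpi-deluge
    restart: always
    ports:
      - "8112:8112"
    volumes:
      - /mnt/storage/downloads:/downloads
  pydio:
    image: jordancrawford/rpi-pydio-docker
    restart: always
    ports:
      - "8080:80"
    volumes:
      - /mnt/storage/files:/data
```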

Services

Pydio

Pydio running on my Raspberry Pi.

Pydio is like Google Drive for your personal content. You can get access to it from anywhere in the world, give people accounts to access your content, or share a folder to anyone with a public link.

To get Pydio running I created an image of my own at jordancrawford/rpi-pydio-docker based off kdelfour/pydio-docker.

Plex

Plex running on my Raspberry Pi.

Plex is a server for your personal media collection. It collects brilliant metadata about your content and allows it to be played pretty much anywhere with the web, desktop, TV and mobile apps. To get Plex running I used greensheep/plex-server-docker-rpi.

Plex runs well, and will transcode most files for use in the Plex web app or Rasplex, but some formats, like H.265, have issues. Everything works well on a more capable client like a computer or phone.

Deluge

Deluge running on my Raspberry Pi.

Deluge is a torrent download server. The best thing about Deluge is how client-server oriented it is: you can connect to a Deluge server through the desktop app or the web interface. Torrenting from the Raspberry Pi is great because it's always wired into the network and on 24/7.

To get Deluge running I created my own image at jordancrawford/rpi-deluge.

Samba

A Samba volume mounted on my computer.

Samba allows you to connect directly to the drive from other computers using SMB. I only use Samba on my local network, with Pydio taking over for remote situations.

To get Samba going I used dastrasmue/rpi-samba.

A Blackmagic Speed Test of the drive connected with Samba (and my computer via gigabit ethernet).

The drive access speeds aren’t amazing through Samba, but I’m hoping this will improve with the Pine 64’s gigabit ethernet and faster CPU.

Getting remote access?

Services can be accessed over the local network on a particular port, or port forwarded to the internet, but a domain like plex.example.com beats 203.0.113.5:2016 any day!

My next post covers how I got remote access to my Pi without any port forwarding or a static IP.

Take Away

I’ve learnt a few things from this experience:

  • Docker is simple to use once an image has been made, but making an image of your own can take much longer than you might expect.
  • Directly connected storage is significantly simpler. Initially, the Raspberry Pi was connected to my old Time Capsule unit using Docker Volume Netshare. This worked, but in the end was a much more complicated solution than required.
  • Have a good look through Docker Hub before creating a Docker image. There are plenty of images out there; you just need to look hard. And of course, if you do implement your own service, please release it for everyone to use!

Creating Osprey Launcher

Over the summer of 2014 I decided to enhance our home theatre PCs with a launcher that simplifies the use of media player apps and web content, while saving time and energy.

The Problem

The world of home entertainment is changing rapidly, especially with the advent of connected devices like the Apple TV, Roku and other ‘smart TV’ solutions combined with the ever-changing world of cloud services.

However, for now, the best all-round solution for home entertainment remains the home theatre PC. At home we've got two HTPCs and a server. I'll get to the server another day, but for us the best solution is a combination of three programs:

The three media centre applications run on the home theatre PC: Plex, MediaPortal and Kodi.

  • Plex, for playback of local media collections

  • MediaPortal, for playback of recorded broadcast TV

  • Kodi, for streaming

Kodi is actually capable of doing it all, using the PleXBMC and MediaPortal add-ons, but you really can't beat the native experience of each app.

Without a mouse and keyboard, it’s hard to switch between all these apps. Also, waiting for each one to start up every time becomes frustrating.

This led to the following requirements:

  • Ability to switch between applications using a remote
  • Switching between applications should be fast
  • Applications should be able to be added easily
  • A user interface that can be used to show other helpful information

Development Process

The project was split into two parts. The frontend is all about giving a responsive and clear interface for the user, while the backend manages the applications and sends commands to the operating system.

Frontend

I started out with a simple Windows Forms interface made up of a few circles. I quickly found that Windows Forms is limited in the visual effects it offers and isn't very easy to work with.

The solution was to embed a web interface into the app. Why a web interface? You can make just about anything in CSS, the interface can be animated and use visual effects like shadows and rounding, and there are plenty of resources and frameworks available to make the job easier.

Also, because the web interface sits on the home server, it is very easy to update the available applications without recompiling and redistributing the app.

Tools of the Trade

Icons of the development frameworks used in the frontend: Grunt, Angular and Bootstrap.

Over summer I was working on another project in AngularJS so I decided I’d build this in Angular too. Angular makes it easy to ensure the interface is always up to date with the model.

I also used Yeoman to set up all the basics, such as the Bower package manager and Grunt for task running.

Using Bootstrap allowed me to get started easily, as well as make the frontend responsive so it would work well on TVs of different resolutions.

Design

I’m no interaction design expert, but with enough iterations the interface got to a point where it’s both easy to use and flexible.

The first revision of the user interface with a horizontal layout.

The initial layout looked pretty nice when it was built, but I got to the stage where I wanted to add more applications and have the interface respond better to different screen sizes.

I decided to switch to an Apple TV inspired grid layout. This was easily made responsive and could hold as many applications as I needed, scrolling the page when it reached the bottom.

The second revision of the user interface with a grid style layout.

The backdrop of the launcher was taken by Avery Photography.

Embedding the user interface

Windows Forms has a built-in web browser control, but it's based on Internet Explorer. Enough said; it was time to embed Chrome. CEF is an embeddable version of Chromium, and CefSharp allows CEF to be embedded within a C# application!

This also allows me to call JavaScript functions from C#, and call C# methods from JavaScript, which is much easier than communicating through sockets or HTTP calls.

Backend

The backend’s job is to manage applications based on commands from the frontend.

Language of Choice

Our HTPCs both run Windows 7 and I've used C# before, so this was my language of choice.

Design Patterns

Having just completed an object-oriented programming paper at university, I was keen to put it to use with some design patterns.

Without these design patterns, my source code would have become a mess and any future changes would have been painful.

A Facade is used to handle interactions between other classes, acting as the one class every other class talks through. For example, when the FrontendBridge gets a message from the frontend to open an application, the facade finds the application and sends it the open message.

A class diagram of the Facade design pattern.

State ensures that the state of an application doesn't change the way we interact with it. Other classes have no idea what state an application is in; they just send it commands. This makes implementing new states easy, and saves plenty of 'if' statements.

This is implemented by storing an ApplicationInstance in LaunchableApplication, with a method to change the current application instance (for state transitions).

Singletons are necessary for implementing the Facade or the FrontendBridge (things there will only ever be one of). A singleton exposes a static method that initialises an instance of the class on first use, or simply returns the existing instance on subsequent calls.

Suspending Functionality

One of the most important aspects of this project is to be able to switch between applications quickly. The computers I was using aren’t particularly fast, so waiting 20 seconds for an application to launch gets annoying!

One option is to just keep all applications open, which would work fine on some computers, but it would have a nasty effect on these machines. The computers are also left on all day and night, so reducing CPU usage will be better for my parents' power bill!

The best solution for me was a command line utility called PsSuspend. Using it is simple: launch PsSuspend.exe with the process ID to suspend, and add the -r flag to resume.
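In other words (the process ID here is made up):

```
PsSuspend.exe 1234      # freeze process 1234
PsSuspend.exe -r 1234   # resume it later
```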

PsSuspend completely freezes an application so that it uses 0% CPU, but it remains completely open in memory.

Resuming a process brings it back to life instantly. I wired the PsSuspend command up to the close() method of my OpenApplication state (which switches to the suspended state), and to resume when the SuspendedApplication state receives a launch().

A demo of the media centre application suspending functionality in Osprey Launcher.

Application Switching

Moving an application (or the launcher) to the front of the screen turned out to be harder than I expected.

For the longest time I had an issue where an application could be launched fine, but focus would not return to the launcher if the application had received any keypresses. Even weirder, this only happened on the HTPCs, not in my development environment. I even asked about the issue on Stack Overflow, but gladly found the answer myself in the end.

The difference was that I was running with debugging attached while the other machines weren't. The breakthrough came from reading a CodeProject tutorial which explains the rules for getting focus from another window: if an application is being debugged, it is allowed to switch back to itself. The solution came from a blog article which worked around the limitation by programmatically pressing the 'alt' key.

Catching remote button presses

I had trouble catching keyboard events while the launcher was not active, plus I needed support for a specific button on the remote. The easiest way was to set up EventGhost to make a UDP request to the launcher.

Using EventGhost gives good support for most remotes, and because it's UDP, a smartphone or other device could also trigger the launcher if needed.
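Any tool that can send a UDP datagram will do. For example, from a laptop on the same network, something like this would trigger the launcher (the IP address, port, and message are placeholders for whatever the launcher listens for):

```shell
echo "show-launcher" | nc -u -w1 192.168.1.10 9876
```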

Extensions and Further Development

Osprey Launcher continues to be used by my parents. Eventually I intend to prepare it for open source release, as well as add a few extra features.

Launching Websites

It’s pretty neat to see how new ideas evolve as software is used, and this feature is a great example.

Originally, CefSharp was just used to display the frontend. However, with a little modification, websites could be treated as launchable items, giving the launcher many extra uses.

A demo of launching web content within Osprey Launcher.

Desktop Mode

Something I really like about the launcher is that it simplifies the whole experience of using the HTPC by hiding the complications of the operating system behind it.

Desktop Mode adds to this experience by treating the underlying operating system like just another application. Apps opened in the launcher are completely hidden, even from the Windows taskbar, making for a clean Windows experience while in desktop mode.

A demo of desktop mode within Osprey Launcher.

Force Close

Things don't always work perfectly, especially on these slow old computers. So that Task Manager doesn't need to be launched, I added a 'Force Close' option to the context menu of a launchable application.

A demo of the force close feature within Osprey Launcher.

Automatically return to launcher

At most, the computer is only used a few hours a day. If an application is left open, it can keep the fan on all day until the computer is next used.

To keep CPU usage low, the launcher automatically pulls out of an application after a few hours.

Frontend Widgets

A preview of widgets in Osprey Launcher showing the remaining disk space, recording status, and current time in the top bar.

The launcher provides a nice way to integrate the HTPCs with information from the server. I made a few AngularJS directives to provide this information:

  • Number of recordings on MediaPortal today

The MediaPortal install on the home server has MPExtended installed, providing a web API into the MediaPortal database.

Counting the number of results returned by TVAccessService’s GetScheduledRecordingsForToday did the trick.

  • Hard drive space available on server

This displays the available space percentage reported by a PHP script on the server. This one is handy with a home server setup like this, as the server can easily go months without anyone logging into it.

Take Away

As well as creating something neat, this project has been a major learning experience, giving me some insights:

  • Spending time to think about the best class structure is worth it in the end
  • User input provides new ideas and changes, so get it into the hands of users as soon as possible
  • Don’t re-invent the wheel, there are plenty of frameworks and tools available to save time
  • The Windows API is a mysterious beast

Project Launcher

A small, open-source command line tool for quickly setting up your development environment.

I switch between lots of projects at the moment, so I wanted something that made this easier to do.

I just open a command line, run project [project_name], and it all starts up!

There are still lots of neat things this could do, and it could be made easier to work with other programs, so feel free to contribute on BitBucket.

Preview

Preview of Project Launcher operation.
Virtualising an old Mac

A company I help out with IT is about to get rid of their oldest computer: a 2006 Intel iMac. This is the last machine they've got that can run Tiger and Freehand MX, so it's time to virtualise!

Freehand MX is an old PowerPC application. PowerPC was the processor architecture Macs used until the Intel switch of 2006. Rosetta is a PowerPC-to-Intel translator that was built into OS X to make this switch easier; however, it isn't available in Lion or later. Going forward, the only real way to use Freehand MX is in a virtualised environment running an older version of OS X.

I wanted to use VirtualBox for this. VirtualBox is great because it's free and works well enough for what I need. Once I had a version of OS X running inside VirtualBox, the next step was to migrate the existing OS X install into the virtual machine.

Migrating from a real partition to a virtual machine

Newer versions of OS X allow Migration Assistant to transfer files over the network. In Tiger, however, you need to use Target Disk Mode and a FireWire cable. This lets you use your Mac like an external hard drive for the other Mac to migrate from, but it's a little difficult (impossible!) to plug a FireWire cable into a virtual machine.

My VirtualBox install was on the Snow Leopard partition of the iMac, with Tiger installed on another partition. Using some VirtualBox utilities, we can create a fake hard drive in the virtual machine that links to the real Tiger partition.

1. Get the disk identifier

Open Disk Utility, select the partition on the left, click Info, then make a note of the “Disk Identifier” of the Tiger partition. In my case, this was disk0s2.

Disk Utility Information window with Disk Identifier highlighted.

2. Eject this partition

Disk Utility with the Tiger partition ejected on the right.

3. Create the raw disk link

Now we need to run some commands from the VirtualBox Manual.

Open up Terminal (in the Applications/Utilities folder) and run the command below. Don't forget to substitute your own path for the .vmdk file and the disk identifier you got from step 1.

VBoxManage internalcommands createrawvmdk -filename [path/to/file.vmdk] -rawdisk /dev/[your disk identifier]

4. Fix permissions

I then had some problems with permissions. To change the permissions on your .vmdk file so VirtualBox can access it, use:

sudo chmod 777 [path/to/file.vmdk]

I also needed to change the permissions of the volume so when the link is followed, VirtualBox can actually read the partition it links to. For this, use:

sudo chmod 777 /dev/[your disk identifier]

The required commands run in Terminal.

5. Add the drive

Now go to the settings for your VirtualBox virtual machine. Under “Storage”, click the small button to add a device. Choose “Add Hard Disk”, then “Choose existing disk”, and locate the raw disk link we made earlier.

The VirtualBox Add Hard Disk menu in the settings for a virtual machine
Add Hard Disk menu options

If all goes well, it should now show up correctly in the VirtualBox list of disks.

The virtual machine storage settings with the new drive added

6. Migrate!

Boot into the virtual OS and you should see the partition in your guest OS’s Disk Utility.

Disk Utility in the virtual machine, showing the real partition

You can now migrate from the old partition.

The virtual machine migrating from the real partition

In conclusion…

As a result of this process, we are no longer tied to old hardware. This means we’ll forever have access to the old environment no matter what hardware and software setup we have!