For a few years now I’ve been wanting to organise a family photo and video archive. Digital content is much easier to deal with, but my family members still have boxes of old photos sitting around in their cupboards. In our case, this is basically everything prior to the year 2000.

Capturing the content

A stack of photos.

These physical photos degrade over time and are vulnerable to being destroyed in a disaster. This is an easy problem to solve by scanning the photos so we have a digital copy.

But the photos alone are worthless. It’s the associated memories that make them valuable. In an archive then, it’s essential that the who, what, where and when of each memory is identified. This is difficult, and even more time consuming than the digitisation process, but I think it’s worth it! What’s the point in having a picture if you don’t know what it’s about?

Making the content accessible

A diagram that represents sharing.

Capturing the content is time consuming but relatively simple. But why bother doing this if no one can enjoy it? Making the archive accessible to the whole family is arguably the most important part, and it’s currently proving to be the most challenging.

This solution must:

  1. Make it easy for the entire family to share and access content, ideally with a web link.
  2. Be able to show photos alongside their descriptions and other metadata.
  3. Allow content to be structured in a way that makes sense for families, like filtering by a branch of the family or a type of event.
  4. Be able to contain videos, some of which may be long (potentially several hours’ worth of home videos!).
  5. Ensure I retain control of the data so it will be in a usable form years into the future.
  6. Be secure so only my family members have access.

Flickr came close to this with their “family” and “friends” shareable URLs, but this meant losing full control of the data and made finding content difficult.

Let me know if you have any suggestions for solutions, but at this stage I’m looking at building a system of my own that can fulfil these requirements.

In the 21st century we have an amazing opportunity to preserve these memories effectively. It will be amazing when future generations can so easily look back at the past. Time is of the essence, so get out there and ensure these memories are captured properly!

I wrote a poem as inspiration, enjoy!

My colour fades as I’m forgotten.
They used to visit me, smiling as we reminisced.
But it’s been many moons since I’ve been missed.

I anxiously wait for my eventual doom.
Sitting here, in this small dark room.
Have I slipped their mind?
Will they leave me behind?

The bright light hits.

My time has finally come.
But don’t be sad, this is the best possible outcome.
Released from my fragile form, I flutter to the clouds.

This is my next chapter.
A digital life is the answer.
In my binary form, I will live forever on.

I have a Raspberry Pi home server that I can remotely access through a tinc VPN tunnel with my VPS. Most services can be accessed through the tunnel with addresses like, but some of them are only available on my local network, for example, SMB and SSH. Accessing these services from the local network is actually more difficult than accessing the services available remotely, for a few reasons:

  1. The local IP address may change. Obviously you can set up a MAC address based IP reservation on most routers, but one of my original goals for this system was that it should be able to work even in situations where I don’t have control of the router.

    A representation of local IP addresses for a Raspberry Pi being changed.
  2. Differences between networks. The local address space (e.g.: 192.168.1.xx) varies between networks and the address you want may already be taken. As a result, changing networks means changing your computer’s configuration to access the Pi at its new address.

    Three routers all representing different networks and types of addresses.
  3. IP addresses are hard to remember. There is a reason DNS was invented! DNS allows a friendly name to map to the underlying IP address.


My solution is to run a DNS server on my VPS that fetches the IP address from the Pi through the VPN tunnel when a request is made. This means to access my Pi through the local network, I just punch in an address like and I can access it no matter what its local IP address is!

A diagram showing the information flow between the computer, local address DNS server and Raspberry Pi.

The DNS server that runs on my VPS is called local-address-dns. This runs a DNS server using dnsd, and upon an incoming DNS request it connects to local-address-dns-client-rpi running on my Pi. local-address-dns-client-rpi is a simple web server that returns the Pi’s IP address on one of its network interfaces. An NS record on my domain points addresses like to my VPS to be handled by local-address-dns.
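To give an idea of how the client half works, here is a minimal sketch of a local-address-dns-client-rpi style web server: it answers any GET request with the machine’s local IP address. The port and the UDP-connect trick are my assumptions for illustration, not necessarily what the real project does.

```python
# Sketch: a tiny web server that reports the machine's local IP address.
# The DNS server on the VPS could query this through the VPN tunnel.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

def local_ip():
    # Connecting a UDP socket sends no packets, but it makes the OS pick
    # the local interface address it would route out of.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available; fall back to loopback
    finally:
        s.close()

class IPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = local_ip().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(host="0.0.0.0", port=8080):
    # Hypothetical port; the real client may use something else.
    HTTPServer((host, port), IPHandler).serve_forever()
```

The DNS server on the VPS can then answer each incoming query by fetching this address over the tunnel and returning it as an A record.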

A connection dialog with the address entered.
Connecting to SMB on the Pi. I don't ever need to worry about what the Pi's IP address is!

My parents just got two IP cameras: cameras that connect to your home network so their video can be accessed over the network.

An IP camera.

I had some requirements to make the IP cameras useful:

  1. Remote Access. Obviously, an IP camera that can only be accessed from the local network isn’t very useful if you want to check up on the home! The cameras should be accessible from anywhere in the world via an internet connected device.

  2. Security. The internet is full of stories about the dangers of insecure IP cameras, such as the website allowing public access to thousands of them. I want all access to the cameras to be encrypted and require authentication.

    A diagram of a secure connection to a camera.
  3. Compatibility. The IP camera’s built-in web interface requires a browser plugin. I couldn’t even get this going on my computer, let alone a mobile device. Accessing the cameras should be possible from any web browser, including on mobile devices.

  4. Recording. The footage from the cameras should be recorded. Obviously, these cameras aren’t being watched 24/7, so their footage should be able to be retrieved at a later point if a need arises.

    A recording icon.
  5. Simplicity. Last, but not least, access to the cameras should be simple. My way of judging this is that I should be able to get into the cameras on any computer without needing a bookmark or note.

Finding a solution

I discovered that the cameras we have run a Real Time Streaming Protocol (RTSP) server so the awful web interface is optional. Interestingly, despite the web interface being password protected, the RTSP stream is wide open!

My first thought was to build a web interface of my own that provides security and HTML5 support by transcoding the camera stream. This would have been a fair bit of work, and it doesn’t address the recording requirement.

However, I then discovered AngelCam. This service connects to your IP cameras and lets you access them from a browser or their iOS/Android app. In addition to meeting my requirement of recording, this also means all the recordings are safe off-site, making it difficult to wipe the footage.

A censored image of the cloud recordings page.
A screenshot of the AngelCam recordings page. The footage preview has been pixelated intentionally.

There are also plenty of other optional features like public broadcasts, timelapses, and a few more features in the works, like licence plate recognition or alerts when a line is crossed.

Setting up AngelCam

There are a few options to connect cameras to AngelCam.

  • AngelCam-ready camera. Some cameras have built-in support for AngelCam, but not mine. Next!
  • Providing the camera URL. You can provide the URL of the camera directly to AngelCam. Most likely this won’t be encrypted, and it may rely on port forwarding and a dynamic DNS service.
  • AngelBox. AngelCam sells AngelBox, a Raspberry Pi loaded with AngelCam’s software. This sets up a secure connection between your cameras and AngelCam’s servers, meaning they don’t need to be publicly accessible.
  • Arrow client. The Arrow client is open source, allowing you to use your own computer like an AngelBox.

As we’ve got a home server, I wanted to use this as the Arrow client. I made a Docker image (jordancrawford/arrow-client) to run the Arrow client easily on any operating system.


AngelCam meets all my requirements: it provides secure remote access, works on any device, and records footage. It’s also simple to access, only requiring visiting the AngelCam website and remembering a username and password.

AngelCam has worked well over the last few months, and I’m looking forward to seeing their feature set expand over time.

The AngelCam app with the two cameras present.

Please note: I have no affiliation with AngelCam, and I receive no benefit from this post; it’s simply the solution I discovered.

Edit, 27 Dec 2016: Edited to reflect the fact that AngelCam has now removed their free plan. It’s likely still good value for the convenience, but as we have a home server I’m now considering self-hosted alternatives.

Accessing home services from anywhere, without port forwarding!

My Raspberry Pi is set up as a home server, providing me access and control of my content through several services:

  • A Deluge server for torrent downloads.
  • A Plex server to manage and stream my media collection.
  • A Pydio server for remote access and management of files.

This is great, but I want to access my content when I’m away from home. Previous experience with remote access solutions inspired some requirements:

  1. I should be able to access my services from any computer in the world.
  2. I should be able to punch in an easy-to-remember web address like This means the standard HTTP/HTTPS ports should be used!
  3. All transmission of my content should be encrypted.
  4. Moving house or ISP shouldn’t break my remote access.

Problems with home internet connections

A house.

The hardest requirement to cater to is that moving house or ISP shouldn’t break my remote access. Why is this such an issue?

May not have a static IP

Very rarely does a home connection come with a static IP. Getting one typically results in extra charges, and the DNS record would need updating if anything changed. The solution is a dynamic DNS service, which solves both of these issues, but we still end up with the problems below.

May not have control of the router

Port forwarding needs to be set up on your router so incoming connections are forwarded to the home server. However, you may not have full control over the router to set up these rules. Many routers will also take the default HTTP/HTTPS ports for their own services, leaving you with non-standard port numbers for everything else.

May not be able to get incoming connections

Finally, once we’ve sorted out everything above, you may not be able to get incoming traffic to reach your house! As a result of the IPv4 address shortage and ISP firewalls, incoming connections don’t always work. Never fear, we can still tunnel traffic through the internet with the help of another computer with a more accessible connection.

Tunnelling out

A diagram with a Raspberry Pi in the home network connected via a tunnel to the proxy server, allowing access over the internet with a client.

The concept of using a tunnel is pretty simple. We may not be able to get incoming connections to the home server, but the home server can set up an outgoing tunnel connection with some other machine on the internet. This other machine can be accessed from anywhere and forwards connections through the tunnel to the home server.

The Other Machine

To meet my crazy requirements above, some other machine needs to be involved. There are services dedicated to providing remote access to your networks like Hamachi. These work well but require client software to join the virtual network, not meeting my first requirement of working on any computer!

Other than that, there are virtual private network (VPN) services which provide port forwarding; however, it’s unlikely you’d be able to use the HTTP and HTTPS ports.

Last resort: DIY! A virtual private server (VPS) is a cheap way to get a small cloud server with a decent connection and its very own IPv4 address! For this I grabbed a VPS from Vultr, whose cheapest server has more than enough grunt to provide remote access.

Reverse SSH Tunnel

A common way to get remote access through a firewall is with a reverse SSH tunnel. This is easy to set up and works well, but I discovered that HTTP-based services run extremely slowly through the tunnel. The most likely reason is that both SSH and HTTP use the TCP protocol to transmit data over a network. TCP ensures a reliable connection with built-in error checking and transmission control, but this comes at a cost of speed. Running HTTP through an SSH tunnel performs these error checks twice, resulting in much slower speeds.
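For context, a reverse SSH tunnel is a one-liner run from the home server; the ports, username and host below are examples, not my actual setup:

```shell
# Run from the home server: expose its local web service (port 80)
# on the VPS's port 8080. -N means no remote shell, just the tunnel.
ssh -N -R 8080:localhost:80 user@vps.example.com
```

Anyone who can reach the VPS on port 8080 is then forwarded through the tunnel to the home server.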

The answer is to switch to something UDP based. UDP is much faster because it’s just packets sent straight over the network without error checking or flow control. A UDP based tunnelling solution means that only the HTTP layer is performing these extra tasks.

tinc VPN

The tinc VPN software was the answer. tinc can be used to create virtual networks between computers. It utilises UDP so it runs quickly, all traffic is encrypted, and it continually re-checks the status of its VPN connections, so it works well even on unreliable connections.

I posted an article about how to configure tinc for this purpose, but this DigitalOcean tutorial is a good resource too. The setup involves configuring the network, generating key-pairs and copying the key-pairs between the machines.
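To give an idea of the shape of that configuration, here is a minimal sketch; the network name, hostnames, addresses and subnets are examples (and each host’s public key gets appended to its file in hosts/):

```
# On the VPS: /etc/tinc/homenet/tinc.conf
Name = vps
Interface = tun0

# On the VPS: /etc/tinc/homenet/hosts/vps (also copied to the Pi)
Address = vps.example.com
Subnet = 10.0.0.1/32

# On the Pi: /etc/tinc/homenet/tinc.conf
Name = pi
ConnectTo = vps
Interface = tun0

# On the Pi: /etc/tinc/homenet/hosts/pi (also copied to the VPS)
Subnet = 10.0.0.2/32
```

With this in place, each machine gets a stable address on the virtual network (here, 10.0.0.1 for the VPS and 10.0.0.2 for the Pi) regardless of where the Pi physically sits.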

Both my VPS and Raspberry Pi run Docker so I created rpi-tinc for the Raspberry Pi and used jenserat’s tinc for the VPS.

Pretty URLs

Proxying Connections

With tinc working, all the services on the home server can be accessed through a local IP on the VPS, like for Deluge. Time to turn that into something nice like!

The subdomains point to the VPS’s IP address. The VPS runs an NGINX server using the official NGINX Docker image. NGINX is set up as a proxy server to the home server’s IP address, following the NGINX documentation, meaning external traffic is forwarded through the tinc link to the home server.
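As an illustration, a minimal proxy block for one service might look like this; the domain, tinc address and port are placeholders for my actual values (8112 is Deluge’s default web UI port):

```nginx
server {
    listen 443 ssl;
    server_name deluge.example.com;

    ssl_certificate     /etc/letsencrypt/live/deluge.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/deluge.example.com/privkey.pem;

    location / {
        # 10.0.0.2 is the home server's address on the tinc virtual network
        proxy_pass http://10.0.0.2:8112;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

One such block per subdomain keeps each service on the standard HTTPS port with its own friendly address.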


The LetsEncrypt logo.

HTTPS is used to ensure all data transport is encrypted. This requires valid SSL certificates, which can be obtained for free from Let’s Encrypt through their automated verification process. To set up Let’s Encrypt to automatically renew, I used bringnow’s docker-letsencrypt-manager and shared its volumes with the NGINX container. A very useful tool when working with HTTPS is SSL Labs’ SSL Test.

Using Plex


Plex is a powerful media server, allowing access to your media from anywhere. Plex offers Relay, a feature that allows tunnelling into your home server in a similar way to my setup. This works well, but does have bandwidth limitations. To set up MyPlex through the tinc tunnel I used this Gist to help with the NGINX configuration, and set up a custom URL in my Plex settings, like: (the port number must be explicitly defined for it to work). This works well, but I did have difficulties when Chromecasting from Android on an external network; hopefully Plex will release a fix for this.

Take Away

While my original requirements were pretty over the top, I’m happy I have a solution which satisfies them. I have been using this setup reliably for about three months and having an easy to remember web address for all my home services is great. This process has taught me a lot about Docker, NGINX, and networking, much of which has already become useful in other contexts.

Edit, 29 Oct 2017: Added a link to my new tinc setup guide.

My home server powered by Pi and Docker

I was in need of a server that gives me remote access to my files, can run Plex and download torrents, and is quiet and efficient enough to run 24/7.

The Raspberry Pi 2 board.

The answer? A Raspberry Pi, of course! I bought a Raspberry Pi 2 for about $60 NZD (soon to be replaced by a Pine 64!).

What is Docker?

The Docker logo.

I used Docker to manage the services on the Raspberry Pi. Docker containerises services, allowing them to be shared and scaled easily. For me, the main benefit was that setting up a service is as easy as downloading a Docker image, rather than manually installing packages and making configuration changes.

Core Concepts

An image is the starting point for setting up a service. These typically come from Docker Hub, Docker’s image repository, but failing that you can always build an image of your own. An image includes the operating system and all other software and configuration needed to run the service.

A container is an instance of an image. Each runs independently, with its own file system and environment variables. You can run multiple containers of the same service without them clashing.

While this sounds like a performance nightmare, Docker doesn’t actually run containers as virtual machines; instead, they share the host’s kernel. This means that even on a little device like the Raspberry Pi you can run plenty of containers at once, even hundreds.

Running Docker on the Raspberry Pi

A Docker + Pi combination. Image by Hypriot.

The Docker community for systems with ARM processors is growing. A big name in this space is Hypriot with HypriotOS. This is a lightweight operating system for your Raspberry Pi with Docker built in.

Only Docker images built specifically for ARM processors will work on the Pi. Adding rpi as a search term on Docker Hub usually helps find compatible images.

Setting up an external hard drive

I chose the EXT4 file system as it doesn’t require additional software to be installed on the Pi, and compatibility with other systems isn’t a major requirement for me. I set up the system to auto-mount the drive using its UUID, and used hd-idle to let the drive automatically go to sleep.
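For reference, the mount and spin-down pieces look something like this; the UUID, mount point and idle timeout here are examples, not my actual values:

```
# /etc/fstab — mount the drive by UUID so device-name changes don't matter
UUID=2c97f1a2-ffad-41f8-92e8-ad4b475cb44f  /mnt/storage  ext4  defaults,nofail  0  2

# /etc/default/hd-idle — spin the drive down after 10 minutes of inactivity
HD_IDLE_OPTS="-i 0 -a sda -i 600"
```

Mounting by UUID means the drive comes up at the same path even if it enumerates as a different device after a reboot.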

Managing service configuration

The Docker Compose logo.

I used Docker Compose to manage services on the Pi. Docker Compose works as a layer on top of the Docker command line meaning you can start and manage the entire stack with a single command.

The Docker Compose file describing the services stack can be stored in Git, so you always have access to your working configuration. Re-installing is easy: just clone the repository, run docker-compose up -d, then wait as the images download.
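As a sketch, such a Compose file looks something like this; the image names are the ones mentioned in this post, but the ports and volume paths are illustrative:

```
version: "2"
services:
  deluge:
    image: jordancrawford/rpi-deluge
    restart: always
    ports:
      - "8112:8112"
    volumes:
      - /mnt/storage/downloads:/downloads
  samba:
    image: dastrasmue/rpi-samba
    restart: always
    ports:
      - "445:445"
    volumes:
      - /mnt/storage:/share
```

A single docker-compose up -d then brings the whole stack up, and restart: always keeps the services running across reboots.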



Pydio running on my Raspberry Pi.

Pydio is like Google Drive for your personal content. You can access it from anywhere in the world, give people accounts to access your content, or share a folder with anyone via a public link.

To get Pydio running I created an image of my own at jordancrawford/rpi-pydio-docker, based on kdelfour/pydio-docker.


Plex running on my Raspberry Pi.

Plex is a server for your personal media collection. It collects brilliant metadata about your content and allows it to be played pretty much anywhere with the web, desktop, TV and mobile apps. To get Plex running I used greensheep/plex-server-docker-rpi.

Plex runs well, and will transcode most files for use in the Plex web app or Rasplex, though some formats, like H.265, have issues. Everything works well on a more capable client like a computer or phone.


Deluge running on my Raspberry Pi.

Deluge is a torrent download server. The best thing about Deluge is how client-server oriented it is: you can connect to a Deluge server through the desktop app or the web interface. Torrenting works great on the Raspberry Pi because it’s always wired into the network and on 24/7.

To get Deluge running I created my own image at jordancrawford/rpi-deluge.


A Samba volume mounted on my computer.

Samba allows you to connect directly to the drive from other computers using SMB. I only use Samba on my local network, with Pydio taking over for remote situations.

To get Samba going I used dastrasmue/rpi-samba.

A Blackmagic Speed Test of the drive connected with Samba (and my computer via gigabit ethernet)

The drive access speeds aren’t amazing through Samba, but I’m hoping this will improve with the Pine 64’s gigabit ethernet and faster CPU.

Getting remote access?

Services can be accessed over the local network at a particular port, or port forwarded to the internet, but a domain like beats 123.456.789.123:2016 any day!

My next post covers how I got remote access to my Pi without any port forwarding or a static IP.

Take Away

I’ve learnt a few things from this experience:

  • Docker is simple to use once an image has been made, but making an image of your own can take much longer than you might expect.
  • Directly connected storage is significantly better. Initially, the Raspberry Pi was connected to my old Time Capsule unit using Docker Volume Netshare. This worked well, but was ultimately a much more complicated solution than required.
  • Have a good look through Docker Hub before creating a Docker image. There are plenty of images out there; you just need to look hard. And of course, if you do implement your own service, please release it for everyone to use!