Last weekend I wanted to see the latest movie trailers. Of course, there are plenty of websites out there for this; however, none of them match the excitement of watching trailers before a film at the cinema.

Seats at a cinema.

So that’s what I set out to build. First, I needed to find a source of movie information. I didn’t want to populate this information manually, so I was pleased to find that TheMovieDB has an API. The site is great because there’s a whole community of people dedicated to keeping its information up to date.

The TheMovieDB logo.

I linked in with their API and pulled down details of upcoming and now-showing movies. TheMovieDB’s API lets me get a list of movies with one request; however, it requires another request to fetch the videos (including trailers) for each movie. TheMovieDB has an API limit of 40 requests every 10 seconds, which my site could exceed if it got popular, and I don’t want to give out my TMDB key to every visitor!

Instead, I set up my VPS with a nightly job that fetches data from TheMovieDB and saves it to an Amazon S3 bucket. This job delays its requests to avoid the TMDB API limit and keeps only the movie ID and YouTube trailer IDs. This method saves bandwidth and time, as visitors to LatestTrailers.net only need to make one request to fetch all the data they need! As the entire site is purely static, everything can be hosted on Amazon S3 for simplicity and scalability.
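
To give an idea of the shape of this job, here’s a minimal TypeScript sketch, assuming Node 18+ (for built-in fetch) and the AWS SDK v3. The endpoint paths match TMDB’s v3 API, but the bucket name, output key and delay value are illustrative rather than my exact configuration.

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    const TMDB_KEY = process.env.TMDB_KEY!;
    const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

    async function tmdb(path: string): Promise<any> {
      const response = await fetch(`https://api.themoviedb.org/3${path}?api_key=${TMDB_KEY}`);
      return response.json();
    }

    async function run(): Promise<void> {
      const upcoming = await tmdb("/movie/upcoming");
      const trailers: { id: number; youtubeId: string }[] = [];

      for (const movie of upcoming.results) {
        await sleep(300); // stay well under the 40 requests / 10 seconds limit
        const videos = await tmdb(`/movie/${movie.id}/videos`);
        const trailer = videos.results.find(
          (video: any) => video.site === "YouTube" && video.type === "Trailer"
        );
        // Keep only the movie ID and YouTube trailer ID to minimise the payload.
        if (trailer) trailers.push({ id: movie.id, youtubeId: trailer.key });
      }

      const s3 = new S3Client({});
      await s3.send(new PutObjectCommand({
        Bucket: "latesttrailers-data", // hypothetical bucket name
        Key: "trailers.json",
        Body: JSON.stringify(trailers),
        ContentType: "application/json",
      }));
    }

    run();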

Next, I used the YouTube Player API to embed trailers. This let me hook into events like the video finishing so that I can start the next one playing. I was surprised at the level of control they give embedders, allowing me to prevent annotations, video controls and other distractions for a pure video experience.
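
As a rough sketch of how that playback loop fits together with the IFrame Player API (the element ID and queue variable are hypothetical, while the player parameters are real ones from the embed documentation):

    declare const YT: any; // provided by https://www.youtube.com/iframe_api

    const queue: string[] = []; // YouTube trailer IDs fetched from S3
    let current = 0;

    const player = new YT.Player("player", {
      videoId: queue[current],
      playerVars: {
        controls: 0,       // hide the player controls
        iv_load_policy: 3, // hide annotations
        rel: 0,            // limit related videos at the end
      },
      events: {
        onStateChange: (event: any) => {
          // When a trailer finishes, immediately start the next one.
          if (event.data === YT.PlayerState.ENDED && ++current < queue.length) {
            player.loadVideoById(queue[current]);
          }
        },
      },
    });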

The LatestTrailers.net interface while playing a trailer.

A core feature of the site is that it only plays trailers you haven’t seen. To keep things simple, I used local storage in the browser to store the list of movies the user has seen. When the user watches all the available trailers, they can clear out their list of seen movies to start over!
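
The whole “seen” mechanism boils down to a few lines around localStorage; a minimal sketch (the storage key is hypothetical):

    const SEEN_KEY = "seenMovies"; // hypothetical storage key

    function getSeen(): number[] {
      return JSON.parse(localStorage.getItem(SEEN_KEY) ?? "[]");
    }

    function markSeen(movieId: number): void {
      localStorage.setItem(SEEN_KEY, JSON.stringify([...getSeen(), movieId]));
    }

    // "Start over" simply clears the list.
    function reset(): void {
      localStorage.removeItem(SEEN_KEY);
    }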

The LatestTrailers.net interface when a user has watched all available trailers and has the option to start over.

This simple project took longer than I expected but I’m pleased with the result, and it’s where I’ll go to get my trailers from now on!

Check it out at LatestTrailers.net!

For a few years now I’ve been wanting to organise a family photo and video archive. Digital content is much easier to deal with, but my family members still have boxes of old photos sitting around in their cupboards. In our case, this is basically everything prior to the year 2000.

Capturing the content

A stack of photos.

These physical photos degrade over time and are vulnerable to being destroyed in a disaster. This is an easy problem to solve by scanning the photos so we have a digital copy.

But the photos alone are worthless. It’s the associated memories that make them valuable. In an archive, then, it’s essential that the who, what, where and when of each memory is identified. This is difficult, and even more time consuming than the digitisation process, but I think it’s worth it! What’s the point in having a picture if you don’t know what it’s about?

Making the content accessible

A diagram that represents sharing.

Capturing the content is time consuming but relatively simple. But why bother doing this if no one can enjoy it? Making the archive accessible to the whole family is arguably the most important part, and it’s currently proving to be the most challenging.

This solution must:

  1. Make sharing and accessing content with the entire family easy, ideally with a web link.
  2. Be able to show photos alongside their descriptions and other metadata.
  3. Allow content to be structured in a way that makes sense for families, like filtering by a branch of the family or a type of event.
  4. Be able to contain videos, some of which may be long (potentially several hours’ worth of home videos!)
  5. Ensure I retain control of the data so it will be in a usable form years into the future.
  6. Be secure so only my family members have access.

Flickr came close to this with their “family” and “friends” shareable URLs, but this meant losing full control of the data and made finding content difficult.

Let me know if you have any suggestions of solutions, but at this stage I’m looking at building a system of my own that can fulfil these requirements.

In the 21st century we have an amazing opportunity to preserve these memories effectively. It will be amazing when future generations can so easily look back at the past. Time is of the essence, so get out there and ensure these memories are captured properly!


I wrote a poem as inspiration, enjoy!

My colour fades as I’m forgotten.
They used to visit me, smiling as we reminisced.
But it’s been many moons since I’ve been missed.

I anxiously wait for my eventual doom.
Sitting here, in this small dark room.
Have I slipped their mind?
Will they leave me behind?

The bright light hits.

My time has finally come.
But don’t be sad, this is the best possible outcome.
Released from my fragile form, I flutter to the clouds.

This is my next chapter.
A digital life is the answer.
In my binary form, I will live forever on.

This post is part of a series about my journey towards a low cost and low power home server.

Other articles:


I have a Raspberry Pi home server that I can remotely access through a tinc VPN tunnel with my VPS. Most services can be accessed through the tunnel with addresses like plex.crawford.kiwi, but some of them are only available on my local network, for example, SMB and SSH. Accessing these services from the local network is actually more difficult than accessing the services available remotely, for three reasons:

  1. The local IP address may change. Most routers let you configure a MAC-address-based IP reservation; however, one of my original goals for this system was that it should work even in situations where I don’t have control of the router.

    A representation of local IP addresses for a Raspberry Pi being changed.
  2. Differences between networks. The local address space (e.g. 192.168.1.xx) varies between networks, and the address you want to reserve may already be taken. This means that if you change your network you may also need to change your computer’s configuration to access the Pi at its new address.

    Three routers all representing different networks and types of addresses.
  3. IP addresses are hard to remember. There is a reason DNS was invented! DNS allows a friendly URL to map to the underlying IP address.

    192.168.1.?

My solution is to run a DNS server on my VPS that fetches the IP address from the Pi through the VPN tunnel when a request is made. This means that to access my Pi through the local network, I just punch in an address like local.pi.crawford.kiwi and I can reach it no matter what its local IP address is!

A diagram showing the information flow between the computer, local address DNS server and Raspberry Pi.

The DNS server that runs on my VPS is something I created called local-address-dns. This runs a DNS server using dnsd, and upon an incoming DNS request it connects to local-address-dns-client-rpi running on my Pi. local-address-dns-client-rpi is a simple web server that returns the Pi’s IP address on one of its network interfaces. An NS record on my domain points addresses like local.pi.crawford.kiwi to my VPS to be handled by local-address-dns.
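
In essence, the client side is just a tiny HTTP endpoint that reports the current address. A rough TypeScript sketch of the idea, assuming Node 18+; the interface name and port are illustrative rather than the project’s actual configuration:

    import * as http from "http";
    import * as os from "os";

    const INTERFACE = "eth0"; // assumed interface name
    const PORT = 8053;        // assumed port

    http.createServer((_request, response) => {
      // Report the Pi's current IPv4 address on the chosen interface.
      const addresses = os.networkInterfaces()[INTERFACE] ?? [];
      const ipv4 = addresses.find((address) => address.family === "IPv4");
      response.end(ipv4 ? ipv4.address : "");
    }).listen(PORT);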

A connection dialog with the local.pi.crawford.kiwi address entered.
Connecting to SMB on the Pi. I don't ever need to worry about what the Pi's IP address is!

This works great for services like SSH and SMB, which tend to use a default port, but what about when you run multiple web-based services off the same machine? For example, if you run Plex on port 32400 and Deluge on port 8112.

In these cases, I’d love to just use a domain name like plex.local.pi.crawford.kiwi rather than having to remember local.pi.crawford.kiwi:32400!

There are a few answers to this. One is to configure NGINX to proxy the service based on the domain name being accessed; another is simply to redirect to another URL based on that domain name.

I went with the redirect-based approach and created virtual-host-redirector for this purpose. This makes configuration easy, as you just need to set up a rules.json file with host-to-URL mappings, as sketched below.
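
As a hypothetical example of such a mapping (the exact schema is defined by virtual-host-redirector’s README, so treat this as illustrative):

    {
      "plex.local.pi.crawford.kiwi": "http://local.pi.crawford.kiwi:32400",
      "deluge.local.pi.crawford.kiwi": "http://local.pi.crawford.kiwi:8112"
    }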

With either of these approaches you’ll need to:

  • Make sure you aren’t running anything else on port 80. If you are, just reconfigure that service to another port and set up a redirect rule for it.
  • Configure a wildcard CNAME record with your DNS provider to the local-address-dns managed domain name. For example, mine is a CNAME record of *.local.pi.crawford.kiwi pointing at local.pi.crawford.kiwi.

Edit, 1 July 2018: I’ve added some more details about another project of mine called virtual-host-redirector.

My parents just got two IP cameras: cameras that connect to your home network so they can be accessed over the network.

An IP camera.

I had some requirements to make the IP cameras useful:

  1. Remote Access. Obviously, an IP camera that can only be accessed from the local network isn’t very useful if you want to check up on the home! The cameras should be accessible from anywhere in the world via an internet-connected device.

  2. Security. The internet is full of stories of the dangers of insecure IP cameras, such as the website allowing public access to thousands of them. I want all access to the cameras to be encrypted and to require authentication.

    A diagram of a secure connection to a camera.
  3. Compatibility. The IP cameras’ built-in web interface requires a browser plugin. I couldn’t even get this going on my computer, let alone a mobile device. Accessing the cameras should be possible from any web browser, including on mobile devices.

  4. Recording. The footage from the cameras should be recorded. Obviously, these cameras aren’t being watched 24/7, so their footage should be able to be retrieved at a later point if the need arises.

    A recording icon.
  5. Simplicity. Last, but not least, access to the cameras should be simple. My way of judging this is that I should be able to get into the cameras on any computer without needing a bookmark or note.

Finding a solution

I discovered that the cameras we have run a Real Time Streaming Protocol (RTSP) server so the awful web interface is optional. Interestingly, despite the web interface being password protected, the RTSP stream is wide open!

My first thought was to build a web interface of my own that provides security and HTML5 support by transcoding the camera stream. This would have been a fair bit of work, and it wouldn’t address the recording requirement.

However, I then discovered AngelCam. This service connects to your IP cameras and lets you access them from a browser or their iOS/Android app. In addition to meeting my requirement of recording, this also means all the recordings are safe off-site, making it difficult to wipe the footage.

A censored image of the cloud recordings page.
A screenshot of the AngelCam recordings page. The footage preview has been pixelated intentionally.

There are also plenty of other optional features like public broadcasts, timelapses, and a few more features in the works, like licence plate recognition or alerts when a line is crossed.

Setting up AngelCam

There are a few options to connect cameras to AngelCam.

  • AngelCam ready camera. Some cameras have built-in support for AngelCam, but not mine. Next!
  • Providing the camera URL. You can directly provide the URL of the camera to AngelCam. Most likely this won’t be encrypted, and it may rely on port forwarding and a dynamic DNS service.
  • AngelBox. AngelCam sells AngelBox, a Raspberry Pi loaded with AngelCam’s software. This sets up a secure connection between your cameras and AngelCam’s servers, meaning they don’t need to be publicly accessible.
  • Arrow client. The open-source Arrow client lets you use your own computer like an AngelBox.

As we’ve got a home server, I wanted to use it as the Arrow client, so I made a Docker image (jordancrawford/arrow-client) to run the Arrow client easily on any operating system.
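
Running it looks something like the following; the image’s README defines the exact volumes and arguments, so the config mount here is an assumption:

    # Sketch only: check the jordancrawford/arrow-client README for the
    # actual required configuration.
    docker run -d \
      --restart unless-stopped \
      --name arrow-client \
      -v /home/pi/arrow-config:/config \
      jordancrawford/arrow-client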

Evaluation

AngelCam meets all my requirements: it provides secure remote access, works on any device, and provides recording. It is simple to access too, requiring only a visit to the AngelCam website and remembering a username and password.

AngelCam has worked well over the last few months, and I’m looking forward to seeing their feature set expand over time.

The AngelCam app with the two cameras present.

Please note: I have no affiliation with AngelCam, and I receive no benefit from this post; it’s simply the solution I discovered.

Edit, 27 Dec 2016: Edited to reflect the fact that AngelCam has now removed their free plan. It’s likely still good value for the convenience, but as we have a home server I’m now considering self-hosted alternatives.

Accessing home services from anywhere, without port forwarding!

This post is part of a series about my journey towards a low cost and low power home server.

Other articles:


My Raspberry Pi is setup as a home server, providing me access and control of my content through several services:

  • A Deluge server for torrent downloads.
  • A Plex server to manage and stream my media collection.
  • A Pydio server for remote access and management of files.

This is great, but I want to access my content when I’m away from home. Previous experience with remote access solutions inspired some requirements:

  1. I should be able to access my services from any computer in the world.
  2. I should be able to punch in an easy-to-remember web address like deluge.crawford.kiwi. This means the standard HTTP/HTTPS ports should be used!
  3. All transmission of my content should be encrypted.
  4. Moving house or ISP shouldn’t break my remote access.

Problems with home internet connections

A house.

The hardest requirement to cater to is that moving house or ISP shouldn’t break my remote access. Why is this such an issue?

May not have a static IP

Very rarely does a home connection come with a static IP. Getting one typically costs extra, and without it the DNS record must be updated whenever the address changes. A dynamic DNS service solves both of these issues, but we still end up with the problems below.

May not have control of the router

Port forwarding needs to be setup on your router so incoming connections are forwarded to the home server. However, you may not have full control over the router to setup these rules. Many routers will also take the default HTTP/HTTPS ports for their own services, leaving you with non-standard port numbers for everything else.

May not be able to get incoming connections

Finally, once we’ve sorted out everything above, you may not be able to get incoming traffic to reach your house! As a result of the IPv4 address shortage and ISP firewalls, incoming connections don’t always work. Never fear, we can still tunnel traffic through the internet with the help of another computer with a more accessible connection.

Tunnelling out

A diagram with a Raspberry Pi in the home network connected via a tunnel to the proxy server, allowing access over the internet with a client.

The concept of using a tunnel is pretty simple. We may not be able to get incoming connections to the home server, but the home server can setup an outgoing tunnel connection with some other machine on the internet. This other machine can be accessed from anywhere and forward connections through the tunnel to get to the home server.

The Other Machine

To meet my crazy requirements above, some other machine needs to be involved. There are services dedicated to providing remote access to your networks, like Hamachi. These work well but require client software to join the virtual network, which doesn’t meet my first requirement of working from any computer!

Other than that, there are virtual private network (VPN) services which provide port forwarding; however, it’s unlikely you’d be able to use the HTTP and HTTPS ports.

Last resort: DIY! A virtual private server (VPS) is a cheap way to get a small cloud server with a decent connection and its very own IPv4 address. For this I grabbed a VPS from Vultr, whose cheapest server has more than enough grunt to provide remote access.

Reverse SSH Tunnel

A common way to get remote access through a firewall is with a reverse SSH tunnel. This is easy to set up and works well, but I discovered that HTTP-based services through the tunnel run extremely slowly. The most likely reason for this is that both SSH and HTTP use the TCP protocol to transmit data over a network. TCP ensures a reliable connection with built-in error checking and transmission control, but this comes at a cost of speed. Running HTTP through the SSH tunnel performs these error checks twice, resulting in much slower speeds.
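
For reference, a reverse tunnel like this one exposes the home server’s port 80 on the VPS’s port 8080 (hostnames are placeholders):

    # Run from the home server; clients then connect to vps.example.com:8080.
    # (The VPS needs GatewayPorts enabled in sshd_config for external clients.)
    ssh -N -R 8080:localhost:80 user@vps.example.com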

The answer is to switch to something UDP based. UDP is much faster because it’s just packets sent straight over the network without error checking or flow control. A UDP based tunnelling solution means that only the HTTP layer is performing these extra tasks.

tinc VPN

The tinc VPN software was the answer. tinc can be used to create virtual networks between computers. It utilises UDP so runs quickly, all traffic is encrypted, and it’s continually re-checking the status of its VPN connection so works well even on unreliable connections.

I posted an article about how to configure tinc for this purpose, but this DigitalOcean tutorial is a good resource too. Setup involves writing the network’s configuration, generating key-pairs and copying the key-pairs between the machines.
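
To give a feel for what that configuration looks like, here is a condensed sketch of the VPS side (the home server is the mirror image, with a ConnectTo pointing at the VPS). The network name, hostname and addresses are examples rather than my exact setup:

    # /etc/tinc/homenet/tinc.conf
    Name = vps
    Interface = tun0

    # /etc/tinc/homenet/hosts/vps (copied to the Pi, with the public key appended)
    Address = vps.example.com
    Subnet = 10.0.0.1/32

    # /etc/tinc/homenet/tinc-up (gives the VPS its address on the virtual network)
    ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0

    # Generate this machine's key-pair with: tincd -n homenet -K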

Both my VPS and Raspberry Pi run Docker so I created rpi-tinc for the Raspberry Pi and used jenserat’s tinc for the VPS.

Pretty URLs

Proxying Connections

With tinc working, all the services on the home server can be accessed through a local IP on the VPS, like 10.0.0.2:8112 for Deluge. Time to turn that into something nice like deluge.crawford.kiwi!

The subdomains point to the VPS’s IP address. The VPS has an NGINX server running with the official NGINX Docker image. NGINX is set up as a proxy server to the home server’s IP address following the NGINX documentation, meaning external traffic is forwarded through the tinc link to the home server.
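
One proxy block per service is all it takes; a minimal sketch (the address and port match the Deluge example above, the rest is illustrative):

    server {
        listen 80;
        server_name deluge.crawford.kiwi;

        location / {
            # Forward requests through the tinc link to the home server.
            proxy_pass http://10.0.0.2:8112;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }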

HTTPS

The LetsEncrypt logo.

HTTPS is used to ensure all data transport is encrypted. This requires valid SSL certificates, which can be obtained for free from Let’s Encrypt through their automated verification process. To set up Let’s Encrypt with automatic renewal I used bringnow’s docker-letsencrypt-manager and shared its volumes with the NGINX container. A very useful tool when working with HTTPS is SSL Labs’ SSL Test.

Using Plex


Plex is a powerful media server, allowing access to your media from anywhere. Plex offers Relay, a feature that tunnels into your home server in a similar way to my setup. This works well but has bandwidth limitations. To set up MyPlex through the tinc tunnel I used this Gist to help with the NGINX configuration, and set a custom URL in my Plex settings, like https://plex.crawford.kiwi:443 (the port number must be explicitly defined for it to work). This works well, but I did have difficulties when Chromecasting from Android while on an external network; hopefully Plex will release a fix for this.

Take Away

While my original requirements were pretty over the top, I’m happy I have a solution which satisfies them. I have been using this setup reliably for about three months and having an easy to remember web address for all my home services is great. This process has taught me a lot about Docker, NGINX, and networking, much of which has already become useful in other contexts.

Edit, 29 Oct 2017: Added a link to my new tinc setup guide.