In November of last year I finished my Software Engineering degree and moved to Wellington to start work at Powershop. Powershop is an energy retailer in New Zealand, Australia and the United Kingdom. I work as part of the team that builds the Rails app that brings the power to the people.

The Powershop logo.

My first few months were spent in training on what Powershop calls the Dev Train.

The Dev Train

I had never done anything with Ruby before, so the Dev Train started with building a command-line game of Hangman in Ruby, followed by a Rails version.

A screenshot of my command line Hangman game.
My first Ruby program, command line Hangman!

While building these, the other Dev Trainees and I had code review sessions with Dev Train mentors. These sessions taught us how best to design software systems and write clean, maintainable code.

Next, I made a Rails version of the Ticket to Ride board game. I approached this using test driven development (TDD). This basically means you write tests for a new feature, then write the code that makes those tests pass, then refactor both the code and the tests.
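
To make that loop concrete, here is a minimal sketch of one TDD cycle in RSpec. The Route class and its methods are hypothetical examples, not my actual implementation:

```ruby
# route_spec.rb -- a minimal sketch of one TDD cycle (hypothetical Route class).
# Step 1: write a failing spec describing the behaviour you want.
RSpec.describe "Route" do
  it "is claimed once a player claims it" do
    route = Route.new
    route.claim_by(double("Player"))
    expect(route).to be_claimed
  end
end

# Step 2: write just enough code to make the spec pass.
class Route
  def claim_by(player)
    @claimed_by = player
  end

  def claimed?
    !@claimed_by.nil?
  end
end

# Step 3: refactor the code and the specs, keeping the suite green.
```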

Testing!

For me, one of the biggest takeaways of the Dev Train was that testing is important! Yes, tests can take a lot of time to write, but especially in a large, complex codebase they become absolutely essential to ensure everything is still ticking along as it should.

For Ticket to Ride I wrote unit tests in RSpec and integration tests with Cucumber.

RSpec

The RSpec logo.

RSpec allowed me to test the public methods in my controllers, models and services. I found the ability to mock out dependent services incredibly useful. This ensures your specs are only concerned with the code under test, rather than all of its dependencies. Mocking means that rather than spending time setting up a situation where a dependent service returns a particular result, you can simply stub it to return that result and make assertions about how the code under test responds.

For example, I have a ClaimRouteController which calls a ClaimRoute service. In my specs for ClaimRouteController I mock out ClaimRoute and assert that when ClaimRoute returns errors, those errors are added to the flash to be displayed to the user. This means I don’t have to do the work of setting up a situation where ClaimRoute will fail when all I care about is that the controller handles errors appropriately.
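
A rough sketch of what such a spec looks like (the ClaimRoute interface, params and flash key here are illustrative rather than my exact code):

```ruby
RSpec.describe ClaimRouteController, type: :controller do
  describe "POST #create" do
    it "adds the service's errors to the flash" do
      # Stub the service so the spec never exercises ClaimRoute itself.
      claim_route = double("ClaimRoute", call: false, errors: ["Not enough train cars"])
      allow(ClaimRoute).to receive(:new).and_return(claim_route)

      post :create, params: { game_id: 1, route_id: 2 }

      expect(flash[:errors]).to include("Not enough train cars")
    end
  end
end
```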

Cucumber

The Cucumber logo.

Cucumber allows you to automate the process of a user testing out all the features of your application in a browser. A Cucumber feature consists of a series of plain-English steps that drive the test.

A Cucumber feature is pretty easy to read. For example, as below, the Claim Route feature sets up a game with some train pieces and train cars, then proceeds through the steps to claim a route between two cities. The feature passes if, after doing this, the list of routes shows that route as claimed by the player.
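
The feature reads roughly like this (a sketch of the idea; the exact step wording, player and city names are illustrative):

```gherkin
Feature: Claim Route
  Scenario: Player claims a route between two cities
    Given a game with the players "Jordan" and "Alex"
    And "Jordan" has 4 train pieces and 4 blue train cars
    When "Jordan" claims the route between "Denver" and "Kansas City"
    Then the list of routes shows the route between "Denver" and "Kansas City" as claimed by "Jordan"
```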

Each of these steps has a corresponding step definition. Step definitions are the code behind the scenes that actually executes each step. You can use regular expressions in the names of steps to pass parameters to step definitions, keeping steps natural-sounding and reusable.
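
For instance, a step definition for the claim step above might look something like this (a sketch; the Capybara interactions, form labels and helper methods are illustrative, not my exact implementation):

```ruby
When(/^"([^"]+)" claims the route between "([^"]+)" and "([^"]+)"$/) do |player, origin, destination|
  # The regular expression's captured groups arrive as ordinary block arguments.
  sign_in_as(player)              # hypothetical helper
  visit new_claim_route_path      # hypothetical route helper
  select origin, from: "Origin"
  select destination, from: "Destination"
  click_button "Claim Route"
end
```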

My Ticket to Ride Implementation

It’s fair to say I went overboard on testing my Ticket to Ride game. Despite having ~250 RSpec examples and a further 12 Cucumber scenarios, my game only got as far as allowing users to:

  • Set up a game.
  • Draw additional train cars.
  • Spend train cars and train pieces to claim a route.

Either way, you can check out the code on GitHub and some screenshots of the app below.

Game Board

A screenshot of my Rails Ticket to Ride game's board.

Claiming a route

A screenshot of my Rails Ticket to Ride game's claim a route page.

An Awesome Learning Culture

I’ve now finished the Dev Train and moved on to ‘real work’, but that doesn’t mean the learning stops! With the tech industry moving so fast there is always something new to learn. Powershop has an awesome learning culture; each week a few members of the crew give a talk about something that interests them, and 10% of our time goes towards learning new things with Hackdays projects.

I’m really enjoying my time at Powershop and I’m looking forward to seeing what’s in store for the next few years!

Last weekend I wanted to see the latest movie trailers. Of course, there are plenty of websites out there for this; however, none of them match the excitement of watching trailers before a film at the cinema.

Seats at a cinema.

So that’s what I set out to build. First, I needed to find a source of movie information. I didn’t want to populate this information manually, so I was pleased to find that TheMovieDB has an API. The site is great because there’s a whole community of people dedicated to keeping its information up to date.

The TheMovieDB logo.

I hooked into their API and pulled down details of upcoming and now-showing movies. TheMovieDB’s API lets me get a list of movies with one request, but it requires another request to fetch the videos (including trailers) for each movie. TheMovieDB also has a rate limit of 40 requests every 10 seconds, which a popular site could easily exceed, and I don’t want to hand out my TMDB key to every visitor!

Instead, I set up my VPS with a nightly job that fetches data from TheMovieDB and saves it to an Amazon S3 bucket. This job delays its requests to avoid the TMDB rate limit and keeps only the movie IDs and YouTube trailer IDs. This saves bandwidth and time, as visitors to LatestTrailers.net only need to make one request to fetch all the data they need! As the entire site is purely static, everything can be hosted on Amazon S3 for simplicity and scalability.
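
The nightly job is roughly this shape (a sketch in Ruby using the aws-sdk-s3 gem; the bucket name, output key and one-second delay are illustrative rather than my exact values):

```ruby
require "net/http"
require "json"
require "aws-sdk-s3"

TMDB_KEY = ENV.fetch("TMDB_API_KEY")

def tmdb_get(path, params = {})
  uri = URI("https://api.themoviedb.org/3#{path}")
  uri.query = URI.encode_www_form(params.merge(api_key: TMDB_KEY))
  JSON.parse(Net::HTTP.get(uri))
end

# One request per list to get the movies themselves.
movies = %w[upcoming now_playing].flat_map do |list|
  tmdb_get("/movie/#{list}")["results"]
end

# One extra request per movie for its videos, spaced out to stay
# under TMDB's 40-requests-per-10-seconds limit.
trailers = movies.map do |movie|
  sleep 1
  videos = tmdb_get("/movie/#{movie["id"]}/videos")["results"]
  trailer = videos.find { |v| v["site"] == "YouTube" && v["type"] == "Trailer" }
  { id: movie["id"], youtube_id: trailer && trailer["key"] }
end

# Keep only the IDs and publish a single JSON file for the site to fetch.
# AWS credentials and region come from the environment.
Aws::S3::Client.new.put_object(
  bucket: "latesttrailers-data",   # illustrative bucket name
  key: "trailers.json",
  body: JSON.generate(trailers.select { |t| t[:youtube_id] }),
  content_type: "application/json"
)
```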

Next, I used the YouTube Player API to embed trailers. This lets me hook into events like the video finishing so that I can start the next one playing. I was surprised at the level of control given to embedders, allowing me to hide annotations, video controls and other distractions for a pure video experience.
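
Roughly, the wiring looks like this (a minimal sketch using the YouTube IFrame Player API; nextTrailerId and markAsSeen are hypothetical helpers standing in for the rest of the site’s logic):

```javascript
// Load the YouTube IFrame Player API.
var tag = document.createElement('script');
tag.src = 'https://www.youtube.com/iframe_api';
document.head.appendChild(tag);

var player;
var currentId;

// Called by the IFrame API once it has loaded.
function onYouTubeIframeAPIReady() {
  currentId = nextTrailerId();   // hypothetical helper: picks an unseen trailer
  player = new YT.Player('player', {
    videoId: currentId,
    playerVars: {
      autoplay: 1,
      controls: 0,        // hide the player controls
      rel: 0,             // don't suggest related videos at the end
      iv_load_policy: 3   // hide annotations
    },
    events: { onStateChange: onPlayerStateChange }
  });
}

// When a trailer finishes, mark it as seen and play the next unseen one.
function onPlayerStateChange(event) {
  if (event.data === YT.PlayerState.ENDED) {
    markAsSeen(currentId);        // hypothetical helper: see the localStorage sketch below
    currentId = nextTrailerId();
    player.loadVideoById(currentId);
  }
}
```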

The LatestTrailers.net interface while playing a trailer.

A core feature of the site is that it only plays trailers you haven’t seen. To keep things simple, I used local storage in the browser to store the list of movies the user has seen. When the user watches all the available trailers, they can clear out their list of seen movies to start over!
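
The seen-list is just a little localStorage bookkeeping along these lines (a sketch; the storage key and function names are illustrative):

```javascript
var SEEN_KEY = 'seenMovies';  // illustrative storage key

function getSeenMovies() {
  return JSON.parse(localStorage.getItem(SEEN_KEY) || '[]');
}

function markAsSeen(movieId) {
  var seen = getSeenMovies();
  if (seen.indexOf(movieId) === -1) {
    seen.push(movieId);
    localStorage.setItem(SEEN_KEY, JSON.stringify(seen));
  }
}

// "Start over" simply clears the list of seen movies.
function clearSeenMovies() {
  localStorage.removeItem(SEEN_KEY);
}
```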

The LatestTrailers.net interface when a user has watched all available trailers and has the option to start over.

This simple project took longer than I expected but I’m pleased with the result, and it’s where I’ll go to get my trailers from now on!

Check it out at LatestTrailers.net!

For a few years now I’ve been wanting to organise a family photo and video archive. Digital content is much easier to deal with, but my family members still have boxes of old photos sitting around in their cupboards. In our case, this is basically everything prior to the year 2000.

Capturing the content

A stack of photos.

These physical photos degrade over time and are vulnerable to being destroyed in a disaster. This is an easy problem to solve by scanning the photos so we have a digital copy.

But the photos alone are worthless. It’s the associated memories that make them valuable. In an archive, then, it’s essential that the who, what, where and when of each memory is identified. This is difficult, and even more time consuming than the digitisation process, but I think it’s worth it! What’s the point in having a picture if you don’t know what it’s about?

Making the content accessible

A diagram that represents sharing.

Capturing the content is time consuming but relatively simple. But why bother doing this if no one can enjoy it? Making the archive accessible to the whole family is arguably the most important part, and it’s currently proving to be the most challenging.

This solution must:

  1. Make it easy for the entire family to share and access content, ideally via a web link.
  2. Be able to show photos alongside their descriptions and other metadata.
  3. Allow content to be structured in a way that makes sense for families, like filtering by a branch of the family or a type of event.
  4. Be able to contain videos, some of which may be long (potentially several hours’ worth of home videos!)
  5. Ensure I retain control of the data so it will be in a usable form years into the future.
  6. Be secure so only my family members have access.

Flickr came close to this with their “family” and “friends” shareable URLs, but this meant losing full control of the data and made finding content difficult.

Let me know if you have any suggestions for solutions, but at this stage I’m looking at building a system of my own that can fulfil these requirements.

In the 21st century we have an amazing opportunity to preserve these memories effectively. It will be amazing when future generations can so easily look back at the past. Time is of the essence, so get out there and ensure these memories are captured properly!


I wrote a poem as inspiration, enjoy!

My colour fades as I’m forgotten.
They used to visit me, smiling as we reminisced.
But it’s been many moons since I’ve been missed.

I anxiously wait for my eventual doom.
Sitting here, in this small dark room.
Have I slipped their mind?
Will they leave me behind?

The bright light hits.

My time has finally come.
But don’t be sad, this is the best possible outcome.
Released from my fragile form, I flutter to the clouds.

This is my next chapter.
A digital life is the answer.
In my binary form, I will live forever on.

I have a Raspberry Pi home server that I can access remotely through a tinc VPN tunnel with my VPS. Most services can be accessed through the tunnel with addresses like plex.crawford.kiwi, but some of them are only available on my local network, for example, SMB and SSH. Accessing these services from the local network is actually more difficult than accessing the remotely available ones, for a few reasons:

  1. The local IP address may change. Obviously you can set up a MAC-address-based IP reservation on most routers, but one of my original goals for this system was that it should work even in situations where I don’t have control of the router.

    A representation of local IP addresses for a Raspberry Pi being changed.
  2. Differences between networks. The local address space (e.g. 192.168.1.xx) varies between networks, and the address you want may already be taken. As a result, changing networks means changing your computer’s configuration to access the Pi at its new address.

    Three routers all representing different networks and types of addresses.
  3. IP addresses are hard to remember. There is a reason DNS was invented! DNS allows a friendly hostname to map to the underlying IP address.

    192.168.1.?

My solution is to run a DNS server on my VPS that fetches the IP address from the Pi through the VPN tunnel when a request is made. This means that to access my Pi through the local network, I just punch in an address like local.pi.crawford.kiwi and I can access it no matter what its local IP address is!

A diagram showing the information flow between the computer, local address DNS server and Raspberry Pi.

The DNS server that runs on my VPS is called local-address-dns. This runs a DNS server using dnsd, and upon an incoming DNS request it connects to local-address-dns-client-rpi running on my Pi. local-address-dns-client-rpi is a simple web server that returns the Pi’s IP address on one of its network interfaces. An NS record on my domain points addresses like local.pi.crawford.kiwi to my VPS to be handled by local-address-dns.
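
The client end of this is tiny. A minimal sketch of the idea in Ruby (the real client does a little more; the port and the way the interface is chosen here are illustrative):

```ruby
require "socket"
require "webrick"

# Tiny web server that reports the Pi's current address on the local network.
server = WEBrick::HTTPServer.new(Port: 8053)  # illustrative port

server.mount_proc "/" do |_request, response|
  # Pick the first private IPv4 address, i.e. the LAN-facing interface.
  address = Socket.ip_address_list.find(&:ipv4_private?)
  response.body = address ? address.ip_address : ""
end

trap("INT") { server.shutdown }
server.start
```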

A connection dialog with the local.pi.crawford.kiwi address entered.
Connecting to SMB on the Pi. I don't ever need to worry about what the Pi's IP address is!

My parents just got two IP cameras. These are cameras that connect to your home network so their footage can be accessed over the network.

An IP camera.

I had some requirements to make the IP cameras useful:

  1. Remote Access. Obviously, an IP camera that can only be accessed from the local network isn’t very useful if you want to check up on the home! The cameras should be accessible from anywhere in the world via an internet-connected device.

  2. Security. The internet is full of stories about the dangers of insecure IP cameras, such as the website allowing public access to thousands of them. I want all access to the cameras to be encrypted and to require authentication.

    A diagram of a secure connection to a camera.
  3. Compatibility. The IP cameras’ built-in web interface requires a browser plugin. I couldn’t even get this going on my computer, let alone a mobile device. Accessing the cameras should be possible from any web browser, including on mobile devices.

  4. Recording. The footage from the cameras should be recorded. Obviously, these cameras aren’t being watched 24/7, so their footage should be retrievable at a later point if the need arises.

    A recording icon.
  5. Simplicity. Last, but not least, access to the cameras should be simple. My way of judging this is that I should be able to get into the cameras on any computer without needing a bookmark or note.

Finding a solution

I discovered that the cameras we have run a Real Time Streaming Protocol (RTSP) server so the awful web interface is optional. Interestingly, despite the web interface being password protected, the RTSP stream is wide open!

My first thought was to build a web interface of my own that provides security and HTML5 support by transcoding the camera streams. This would have been a fair bit of work and wouldn’t have addressed the recording requirement.

However, I then discovered AngelCam. This service connects to your IP cameras and lets you access them from a browser or their iOS/Android app. In addition to meeting my requirement of recording, this also means all the recordings are safe off-site, making it difficult to wipe the footage.

A censored image of the cloud recordings page.
A screenshot of the AngelCam recordings page. The footage preview has been pixelated intentionally.

There are also plenty of optional extras like public broadcasts and timelapses, with more in the works, such as licence plate recognition and alerts when a line is crossed.

Setting up AngelCam

There are a few options to connect cameras to AngelCam.

  • AngelCam-ready camera. Some cameras have built-in support for AngelCam; not mine. Next!
  • Providing the camera URL. You can provide the URL of the camera directly to AngelCam. Most likely this won’t be encrypted, and it may rely on port forwarding and a dynamic DNS service.
  • AngelBox. AngelCam sells the AngelBox, a Raspberry Pi loaded with AngelCam’s software. This sets up a secure connection between your cameras and AngelCam’s servers, meaning they don’t need to be publicly accessible.
  • Arrow client. The open-source Arrow client lets you use your own computer like an AngelBox.

As we’ve got a home server, I wanted to use this as the Arrow client. I made a Docker image (jordancrawford/arrow-client) to run the Arrow client easily on any operating system.

Evaluation

AngelCam meets all my requirements: it provides secure remote access, works on any device, and records footage. It’s simple to access too, requiring only a visit to the AngelCam website and a username and password.

AngelCam has worked well over the last few months, and I’m looking forward to seeing their feature set expand over time.

The AngelCam app with the two cameras present.

Please note: I have no affiliation with AngelCam and I receive no benefit from this post; it’s simply the solution I discovered.

Edit, 27 Dec 2016: Edited to reflect the fact that AngelCam has now removed their free plan. It’s likely still good value for the convenience, but as we have a home server I’m now considering self-hosted alternatives.