I have a Raspberry Pi home server that I can remotely access through a tinc VPN tunnel with my VPS. Most services can be accessed through the tunnel with addresses like plex.crawford.kiwi, but some of them are only available on my local network, for example, SMB and SSH. Accessing these services from the local network is actually more difficult than accessing the services available remotely, for three reasons:
The local IP address may change. Obviously you can set up a MAC address based IP reservation on most routers, but one of my original goals for this system was that it should be able to work even in situations where I don’t have control of the router.
Differences between networks. The local address space (e.g. 192.168.1.xx) varies between networks, and the address you want may already be taken. As a result, changing networks means changing your computer's configuration to access the Pi at its new address.
IP addresses are hard to remember. There is a reason DNS was invented! DNS allows a friendly name to map to the underlying IP address.
My solution is to run a DNS server on my VPS that fetches the IP address from the Pi through the VPN tunnel when a request is made. This means to access my Pi through the local network, I just punch in an address like local.pi.crawford.kiwi and I can access it no matter what its local IP address is!
The DNS server that runs on my VPS is called local-address-dns. It runs a DNS server using dnsd; upon an incoming DNS request, it connects to local-address-dns-client-rpi running on my Pi. local-address-dns-client-rpi is a simple web server that returns the Pi's IP address on one of its network interfaces. An NS record on my domain points addresses like local.pi.crawford.kiwi to my VPS to be handled by local-address-dns.
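The delegation itself is plain DNS: the subdomain's NS record hands queries off to the VPS. In a BIND-style zone file it might look something like this (the names are from my setup, but the record layout and IP are illustrative):

```
; Zone file entries for crawford.kiwi (example IP, layout illustrative).
; Hand all queries under local.pi.crawford.kiwi to the DNS server on the VPS.
local.pi    IN NS    ns1.crawford.kiwi.
ns1         IN A     203.0.113.10    ; the VPS's public address (placeholder)
```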
Connecting to SMB on the Pi. I don't ever need to worry about what the Pi's IP address is!
My parents just got two IP cameras: cameras that connect to your home network so their footage can be accessed over the network.
I had some requirements to make the IP cameras useful:
Remote Access. Obviously, an IP camera that can only be accessed from the local network isn't very useful if you want to check up on the home! The cameras should be accessible from anywhere in the world via an internet-connected device.
Compatibility. The IP camera's built-in web interface requires a browser plugin. I couldn't even get this going on my computer, let alone a mobile device. Accessing the cameras should be possible from any web browser or mobile device.
Recording. The footage from the cameras should be recorded. Obviously, these cameras aren't being watched 24/7, so their footage should be able to be retrieved at a later point if a need arises.
Simplicity. Last, but not least, access to the cameras should be simple. My way of judging this is that I should be able to get into the cameras on any computer without needing a bookmark or note.
Finding a solution
I discovered that the cameras we have run a Real Time Streaming Protocol (RTSP) server so the awful web interface is optional. Interestingly, despite the web interface being password protected, the RTSP stream is wide open!
My first thought was to build a web interface of my own that provides security and HTML5 support by transcoding the camera stream. This would have been a fair bit of work and wouldn't address the recording requirement.
However, I then discovered AngelCam. This service connects to your IP cameras and lets you access them from a browser or their iOS/Android app. In addition to meeting my requirement of recording, this also means all the recordings are safe off-site, making it difficult to wipe the footage.
A screenshot of the AngelCam recordings page. The footage preview has been pixelated intentionally.
There are also plenty of other optional features like public broadcasts, timelapses, and a few more features in the works, like licence plate recognition or alerts when a line is crossed.
Setting up AngelCam
There are a few options to connect cameras to AngelCam.
AngelCam ready camera. Some cameras have built-in support for AngelCam; mine don't. Next!
Providing the camera URL. You can directly provide the URL of the camera to AngelCam. Most likely this won't be encrypted, and it may rely on port forwarding and a dynamic DNS service.
AngelBox. AngelCam sells AngelBox, a Raspberry Pi loaded with AngelCam's software. This sets up a secure connection between your cameras and AngelCam's servers, meaning they don't need to be publicly accessible.
Arrow client. The Arrow client is the open-source software behind the AngelBox, letting you use your own computer in the same role.
As we've got a home server, I wanted to use it as the Arrow client. I made a Docker image (jordancrawford/arrow-client) to make running the Arrow client easy on any operating system.
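Getting it going is then just a matter of running the container. A rough sketch (the flags here are standard Docker options; the image's actual configuration lives in its README):

```bash
# Run the Arrow client container on the home server.
# --net=host lets it reach cameras on the local network directly.
# Consult the jordancrawford/arrow-client README for its real options.
docker run -d --restart=always --net=host jordancrawford/arrow-client
```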
AngelCam meets all my requirements: it provides secure remote access, works on any device, and provides recording. It's simple to access too, requiring only a visit to the AngelCam website and a username and password.
AngelCam has worked well over the last few months, and I’m looking forward to seeing their feature set expand over time.
Please note: I have no affiliation with AngelCam and receive no benefit from this post; it's simply the solution I discovered.
Edit, 27 Dec 2016: Edited to reflect the fact that AngelCam has now removed their free plan. It’s likely still good value for the convenience, but as we have a home server I’m now considering self-hosted alternatives.
A Plex server to manage and stream my media collection.
A Pydio server for remote access and management of files.
This is great, but I want to access my content when I’m away from home. Previous experience with remote access solutions inspired some requirements:
I should be able to access my services from any computer in the world.
I should be able to punch in an easy to remember web address like deluge.crawford.kiwi. This means all standard HTTP/HTTPS ports should be used!
All transmission of my content should be encrypted.
Moving house or ISP shouldn’t break my remote access.
Problems with home internet connections
The hardest requirement to cater to is that moving house or ISP shouldn’t break my remote access. Why is this such an issue?
May not have a static IP
Very rarely does a home connection come with a static IP. Getting one typically costs extra, and would still require updating the DNS record if anything changes. The solution is a dynamic DNS service, which solves both these issues, but we still end up with the problems below.
May not have control of the router
Port forwarding needs to be set up on your router so incoming connections are forwarded to the home server. However, you may not have full control over the router to set up these rules. Many routers also take the default HTTP/HTTPS ports for their own services, leaving you with non-standard port numbers for everything else.
May not be able to get incoming connections
Finally, once we’ve sorted out everything above, you may not be able to get incoming traffic to reach your house! As a result of the IPv4 address shortage and ISP firewalls, incoming connections don’t always work. Never fear, we can still tunnel traffic through the internet with the help of another computer with a more accessible connection.
The concept of using a tunnel is pretty simple. We may not be able to get incoming connections to the home server, but the home server can set up an outgoing tunnel connection with some other machine on the internet. This other machine can be accessed from anywhere and forwards connections through the tunnel to the home server.
The Other Machine
To meet my crazy requirements above, some other machine needs to be involved. There are services dedicated to providing remote access to your networks, like Hamachi. These work well but require client software to join the virtual network, which fails my first requirement of working on any computer!
Other than that, there are virtual private network (VPN) services which provide port forwarding; however, it's unlikely you'd be able to use the HTTP and HTTPS ports.
Last resort: DIY! A virtual private server (VPS) is a cheap way to get a small cloud server with a decent connection and its very own IPv4 address! For this I grabbed a VPS from Vultr, whose cheapest offering has more than enough grunt to provide remote access.
Reverse SSH Tunnel
A common way to get remote access through a firewall is with a reverse SSH tunnel. This is easy to set up and works well, but I discovered that HTTP-based services run extremely slowly through the tunnel. The most likely reason is that both SSH and HTTP use TCP to transmit data over the network. TCP ensures a reliable connection with built-in error checking and transmission control, but this comes at a cost of speed. Running HTTP's TCP connection inside the SSH tunnel's TCP connection means all of that error checking happens twice, and the two layers' retransmissions compound, resulting in much slower speeds.
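For reference, this kind of tunnel is a one-liner (hostnames and ports are placeholders):

```bash
# Expose the home server's local port 80 as port 8080 on the VPS.
# -N: no remote command, just forwarding; -R: reverse (remote) forward.
ssh -N -R 8080:localhost:80 user@vps.example.com
```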
The answer is to switch to something UDP-based. UDP is much faster because packets are sent straight over the network without retransmission or flow control. A UDP-based tunnelling solution means only the HTTP layer is performing these extra tasks.
The tinc VPN software was the answer. tinc can be used to create virtual networks between computers. It utilises UDP so it runs quickly, all traffic is encrypted, and it continually re-checks the status of its VPN connection, so it works well even on unreliable connections.
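To give a flavour of the setup, the configuration looks roughly like this (a trimmed sketch with placeholder names and addresses; public keys and the full walkthrough are in my tinc setup guide):

```
# /etc/tinc/home/tinc.conf on the home server
Name = homeserver
ConnectTo = vps

# /etc/tinc/home/hosts/vps (shared from the VPS; its public key is appended)
Address = vps.example.com
Subnet = 10.0.0.1/32

# /etc/tinc/home/hosts/homeserver (shared to the VPS)
Subnet = 10.0.0.2/32
```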
With tinc working, all the services on the home server can be accessed through a local IP on the VPS, like 10.0.0.2:8112 for Deluge. Time to turn that into something nice like deluge.crawford.kiwi!
The subdomains point to the VPS's IP address. The VPS runs an NGINX server using the official NGINX Docker image. NGINX is set up as a reverse proxy to the home server's IP address, following the NGINX documentation, meaning external traffic is forwarded through the tinc link to the home server.
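Each service gets a server block along these lines (a simplified sketch; the upstream address matches the Deluge example above, while the certificate paths are examples, with the HTTPS side covered next):

```nginx
server {
    listen 443 ssl;
    server_name deluge.crawford.kiwi;

    # Certificates from Let's Encrypt (paths are the typical defaults).
    ssl_certificate     /etc/letsencrypt/live/deluge.crawford.kiwi/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/deluge.crawford.kiwi/privkey.pem;

    location / {
        # Forward the request through the tinc link to the home server.
        proxy_pass http://10.0.0.2:8112;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```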
HTTPS is used to ensure all data transport is encrypted. This requires valid SSL certificates, which can be obtained for free from Let's Encrypt through their automated verification process. To set up automatic renewal I used bringnow's docker-letsencrypt-manager and shared its volumes with the NGINX container. A very useful tool when working with HTTPS is SSL Labs' SSL Test.
Plex is a powerful media server, allowing access to your media from anywhere. Plex offers Relay, a feature that tunnels into your home server in a similar way to my setup. This works well but has bandwidth limitations. To set up MyPlex through the tinc tunnel I used this Gist to help with the NGINX configuration, and set a custom URL in my Plex settings, like https://plex.crawford.kiwi:443 (the port number must be explicitly defined for it to work). This works well, but I did have difficulties when Chromecasting from Android on an external network; hopefully Plex will release a fix for this.
While my original requirements were pretty over the top, I’m happy I have a solution which satisfies them. I have been using this setup reliably for about three months and having an easy to remember web address for all my home services is great. This process has taught me a lot about Docker, NGINX, and networking, much of which has already become useful in other contexts.
Edit, 29 Oct 2017: Added a link to my new tinc setup guide.
I used Docker to manage the services on the Raspberry Pi. Docker containerises services, allowing them to be shared and scaled easily. For me, the main benefit was that setting up a service is as easy as downloading a Docker image, rather than manually installing packages and making configuration changes.
An image is the starting point to setup a service. These typically come from Docker’s repository for images called Docker Hub, but failing that you can always setup an image of your own. An image includes the operating system and all other software and configuration needed to run the service.
A container is an instance of an image. Each runs independently, with its own file system and environment variables. You can run multiple containers from the same image without them clashing.
While this sounds like a performance nightmare, Docker doesn't run containers as virtual machines; instead, containers share the host's kernel. This means that even on a little device like the Raspberry Pi you can run plenty of containers at once, even hundreds.
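For instance, spinning up two independent containers from one image is just two docker run commands (the container names and image are placeholders):

```bash
# Two containers from the same image, each with its own file system.
# The second maps a different host port so the two don't clash.
docker run -d --name deluge-one -p 8112:8112 example/deluge-rpi
docker run -d --name deluge-two -p 8113:8112 example/deluge-rpi
```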
Running Docker on the Raspberry Pi
The Docker community for systems with ARM processors is growing. A big name in this space is Hypriot with HypriotOS. This is a lightweight operating system for your Raspberry Pi with Docker built in.
Only Docker images built specifically for ARM processors will work on the Pi. Usually adding rpi as a search term on Docker Hub helps find compatible images.
Setting up an external hard drive
I chose the EXT4 file system as it doesn't require additional software to be installed on the Pi, and compatibility with other systems isn't a major requirement for me. I set up the system to auto-mount the drive using its UUID, and set the drive to automatically go to sleep using hd-idle.
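The steps look roughly like this (the UUID, mount point, and timing are examples; hd-idle is typically configured to start on boot rather than run by hand):

```bash
# Find the partition's UUID.
sudo blkid /dev/sda1

# /etc/fstab entry to mount it by UUID on boot (values are examples):
#   UUID=0a1b2c3d-1234-5678-9abc-def012345678  /mnt/storage  ext4  defaults,nofail  0  2

# Spin the drive down after 10 minutes (600 seconds) of inactivity.
sudo hd-idle -a sda -i 600
```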
Managing service configuration
I used Docker Compose to manage services on the Pi. Docker Compose works as a layer on top of the Docker command line meaning you can start and manage the entire stack with a single command.
The Docker Compose file describing the service stack can be stored in Git, so you always have access to your working configuration. Reinstalling is easy: just clone the repository, run docker-compose up -d, and wait as the images download.
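A cut-down example of what such a Compose file might contain (the Plex image is the one mentioned below; the Deluge image, paths, and ports are placeholders):

```yaml
version: '2'
services:
  plex:
    image: greensheep/plex-server-docker-rpi
    network_mode: host
    volumes:
      - /mnt/storage/media:/media
    restart: always
  deluge:
    image: example/deluge-rpi   # placeholder image name
    ports:
      - "8112:8112"
    restart: always
```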
Plex is a server for your personal media collection. It collects brilliant metadata about your content and allows it to be played pretty much anywhere with the web, desktop, TV and mobile apps.
To get Plex running I used greensheep/plex-server-docker-rpi.
Plex runs well and will transcode most files for use in the Plex web app or RasPlex, but some formats, like H.265, have issues. Everything works well on a more capable client like a computer or phone.
Deluge is a torrent download server. The best thing about Deluge is how client-server oriented it is: you can connect to a Deluge server through the desktop app or the web interface. Torrenting is great on the Raspberry Pi because it's always wired into the network and on 24/7.
The drive access speeds aren’t amazing through Samba, but I’m hoping this will improve with the Pine 64’s gigabit ethernet and faster CPU.
Getting remote access?
Services can be accessed over the local network at a particular port or port forwarded to the internet, but a domain like plex.example.com beats 123.456.789.123:2016 any day!
My next post covers how I got remote access to my Pi without any port forwarding or a static IP.
I’ve learnt a few things from this experience:
Docker is simple to use once an image has been made, but making an image of your own can take much longer than you might expect.
Directly connected storage is significantly better. Initially, the Raspberry Pi was connected to my old Time Capsule unit using Docker Volume Netshare. This worked well but was ultimately a much more complicated solution than required.
Have a good look through Docker Hub before creating a Docker image. There are plenty of images out there; you just need to look hard. And of course, if you do implement your own service, please release it for everyone to use!
Over the summer of 2014 I decided to enhance our home theatre PCs with a launcher that simplifies the use of media player apps and web content, while saving time and energy.
The world of home entertainment is changing rapidly, especially with the advent of connected devices like the Apple TV, Roku and other ‘smart TV’ solutions combined with the ever-changing world of cloud services.
However, for now, the best all-round solution for home entertainment remains the home theatre PC. At home we've got two HTPCs and a server. I'll get to the server another day, but for us the best solution is a combination of three programs:
Kodi is actually capable of doing it all using the PleXBMC and MediaPortal addons, but you really can't beat the native experience of each app.
Without a mouse and keyboard, it’s hard to switch between all these apps. Also, waiting for each one to start up every time becomes frustrating.
This led to the following requirements:
Ability to switch between applications using a remote
Switching between applications should be fast
Applications should be able to be added easily
A user interface that can be used for other helpful information
The project was split into two parts. The frontend is all about giving a responsive and clear interface for the user, while the backend manages the applications and sends commands to the operating system.
I started out with a simple Windows Forms interface made up of a few circles. I quickly found that Windows Forms is limited in the visual effects it offers and isn't very easy to work with.
The solution was to embed a web interface into the app. Why a web interface? You can make just about anything in CSS, the interface can be animated and use visual effects like shadows and rounding, and there are plenty of resources and frameworks available to make the job easier.
Also, because the web interface sits on the home server it is very easy to update the available applications rather than recompiling and redistributing.
Tools of the Trade
Over summer I was working on another project in AngularJS so I decided I’d build this in Angular too. Angular makes it easy to ensure the interface is always up to date with the model.
Windows Forms has a built-in web browser control, but it's based on Internet Explorer. Enough said; it was time to embed Chrome. CEF is an embeddable version of Chrome, and CefSharp allows CEF to be embedded within a C# application!
The backend’s job is to manage applications based on commands from the frontend.
Language of Choice
Our HTPC’s both run Windows 7 and I’ve used C# before, so this was my language of choice.
Having just completed an object oriented programming paper at university I was keen to put this to use with some design patterns.
Without these design patterns, my source code would become a mess and any future changes would become painful.
A Facade is used to handle interactions between other classes, acting as the one class every other class can talk through. For example, if the FrontendBridge gets a message from the frontend to open an application, the facade finds the application and sends it the open message.
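In code it takes roughly this shape (a minimal sketch; the class and member names beyond LaunchableApplication and FrontendBridge are made up for illustration):

```csharp
using System.Collections.Generic;

// The single class the rest of the app talks through.
public class LauncherFacade
{
    // LaunchableApplication is sketched under State below; keying apps
    // by a name string is an assumption for this example.
    private readonly Dictionary<string, LaunchableApplication> apps =
        new Dictionary<string, LaunchableApplication>();

    // Called by FrontendBridge when the frontend asks to open an app.
    public void OpenApplication(string name)
    {
        LaunchableApplication app;
        if (apps.TryGetValue(name, out app))
        {
            app.Launch();   // the app's current state decides what "launch" means
        }
    }
}
```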
State ensures that the state of an application doesn't change the way we interact with it. Other classes have no idea what state an application is in; they just send it commands. This makes implementing new states easy and saves plenty of 'if' statements.
This is implemented by storing an ApplicationInstance in LaunchableApplication, along with a method to change the current application instance (for state transitions).
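A minimal sketch of that structure (method names like SetState are assumptions):

```csharp
// Each state reacts to the same commands in its own way.
public abstract class ApplicationInstance
{
    protected readonly LaunchableApplication App;
    protected ApplicationInstance(LaunchableApplication app) { App = app; }
    public abstract void Launch();
    public abstract void Close();
}

public class OpenApplication : ApplicationInstance
{
    public OpenApplication(LaunchableApplication app) : base(app) { }
    public override void Launch() { /* already open: just bring the window forward */ }
    public override void Close()
    {
        // Suspend the process here, then transition to the suspended state.
        App.SetState(new SuspendedApplication(App));
    }
}

public class SuspendedApplication : ApplicationInstance
{
    public SuspendedApplication(LaunchableApplication app) : base(app) { }
    public override void Launch()
    {
        // Resume the process here, then transition back to the open state.
        App.SetState(new OpenApplication(App));
    }
    public override void Close() { /* already suspended: nothing to do */ }
}

// Callers talk to the application; they never check which state it's in.
public class LaunchableApplication
{
    private ApplicationInstance state;
    public LaunchableApplication() { state = new SuspendedApplication(this); }
    public void SetState(ApplicationInstance next) { state = next; }
    public void Launch() { state.Launch(); }
    public void Close() { state.Close(); }
}
```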
Singletons are necessary for implementing the Facade and the FrontendBridge (things there will only ever be one of). A singleton is accessed through a static method that initialises the instance of the class on first use, or simply returns the single instance if one already exists.
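In C# that pattern is only a few lines (sketch):

```csharp
// Lazy singleton: created on first use, same instance ever after.
public class FrontendBridge
{
    private static FrontendBridge instance;

    private FrontendBridge() { }   // nobody else can construct one

    public static FrontendBridge GetInstance()
    {
        if (instance == null)
        {
            instance = new FrontendBridge();
        }
        return instance;
    }
}
```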
One of the most important aspects of this project is to be able to switch between applications quickly. The computers I was using aren’t particularly fast, so waiting 20 seconds for an application to launch gets annoying!
One option is to just keep all applications open, which would work fine on some computers but would have a nasty effect on these machines. These computers are also left on all day and night, so reducing CPU usage is better for my parents' power bill!
The best solution for me was a command line utility called PsSuspend. Use is very simple: launch PsSuspend.exe with the process ID to suspend, and add the -r flag to resume.
PsSuspend completely freezes an application so that it uses 0% CPU, but it remains completely open in memory.
Resuming a process brings it back to life instantly. I wired the PsSuspend command up to the close() method of my OpenApplication state (which switches to the suspended state), and to resume when the SuspendedApplication state receives a launch().
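The calls themselves are one-liners (a sketch assuming PsSuspend.exe is on the PATH; the helper names are made up for illustration):

```csharp
using System.Diagnostics;

// Thin wrappers around the PsSuspend command line utility.
public static class ProcessFreezer
{
    public static void Suspend(int processId)
    {
        Process.Start("PsSuspend.exe", processId.ToString());
    }

    public static void Resume(int processId)
    {
        Process.Start("PsSuspend.exe", "-r " + processId);
    }
}
```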
Moving an application (or the launcher) to the front of the screen turned out to be harder than I expected.
For the longest time I was having an issue where an application could be launched fine but would not return to the launcher if the application had received any keypresses. Even weirder, this only happened on the HTPCs, not in my development environment. I even asked about the issue on Stack Overflow, but gladly found the answer myself in the end.
The difference was that I was running with debugging while the other machines weren't. The breakthrough came from reading a tutorial on CodeProject which explains the rules for getting focus from another window: if an application is being debugged, it is allowed to switch back to itself. The solution was in a blog article which avoided the limitations by programmatically pressing the 'alt' key.
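The workaround boils down to something like this (a sketch using Win32 calls; constants per WinUser.h):

```csharp
using System;
using System.Runtime.InteropServices;

public static class WindowFocus
{
    [DllImport("user32.dll")]
    private static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

    [DllImport("user32.dll")]
    private static extern bool SetForegroundWindow(IntPtr hWnd);

    private const byte VK_MENU = 0x12;          // the Alt key
    private const uint KEYEVENTF_KEYUP = 0x02;

    public static void BringToFront(IntPtr windowHandle)
    {
        // Tapping Alt convinces Windows to let us take the foreground.
        keybd_event(VK_MENU, 0, 0, UIntPtr.Zero);                // Alt down
        SetForegroundWindow(windowHandle);
        keybd_event(VK_MENU, 0, KEYEVENTF_KEYUP, UIntPtr.Zero);  // Alt up
    }
}
```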
Catching remote button presses
I had trouble catching keyboard events while the launcher was not active, plus I needed support for a specific button on the remote. As a result, the easiest way was to set up EventGhost to send a UDP message to the launcher.
Using EventGhost gives good support for most remotes, and because it's UDP, a smartphone or other device could also trigger the launcher if needed.
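On the launcher side, listening for those messages is straightforward (a sketch; the port and message format are assumptions):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

public class RemoteListener
{
    public void Listen()
    {
        var udp = new UdpClient(9876);                  // port is an example
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = udp.Receive(ref remote);           // blocks until a packet arrives
            string command = Encoding.UTF8.GetString(data);  // e.g. "show-launcher"
            // ...pass the command on to the facade here...
        }
    }
}
```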
Extensions and Further Development
Osprey Launcher continues to be used by my parents. Eventually I intend to prepare it for open source release, as well as adding a few extra features.
It’s pretty neat to see how new ideas evolve as software is used, and this feature is a great example.
Originally, CefSharp was just used to display the frontend. However, with a little modification, websites were treated as launchable items, giving the launcher many extra uses.
Something I really like about the launcher is that it simplifies the whole experience of using the HTPC by hiding the complications of the operating system behind it.
The Desktop Mode adds to this experience by treating the operating system below like an application. Apps open in the launcher are completely hidden, even from the Windows taskbar, making for a pure Windows experience while in desktop mode.
Things don't always work perfectly, especially on these slow old computers. So that Task Manager never needs to be launched, I added a 'Force Close' option to the context menu of a launchable application.
Automatically return to launcher
At most, the computer would only be used a few hours a day. If an application is left open, this can keep the fan on all day until the computer is used next.
To keep CPU cycles low, the launcher will pull out of an application after a few hours.
The launcher provides a nice way to integrate the HTPC’s with information from the server. I made a few AngularJS directives to provide this information.
Number of recordings on MediaPortal today
The MediaPortal install on the home server has MPExtended installed, providing a web API into the MediaPortal database.
This displays the server's available disk space percentage, fetched from a PHP script on the server. This one is handy with a home server setup like this, as the server can easily go months without anyone logging into it.
As well as creating something neat, this project has been a major learning experience, giving me some insights:
Spending time to think about the best class structure is worth it in the end
User input provides new ideas and changes, so get it into the hands of users as soon as possible
Don’t re-invent the wheel, there are plenty of frameworks and tools available to save time