
My setup for downloading & streaming movies and tv

I recently signed up for Netflix and am retiring my headless home media PC. This post will have to serve as its obituary. The box spent about half of its life running FreeNAS, and half running Arch Linux. I’ll briefly talk about my experience with FreeNAS and the migration, and then I’ll get to the robust setup I ended up with.
SilverStone DS380
The machine itself cost around $1000 in 2014, powered by an AMD A4-7300 3.8GHz CPU with 8GB of memory. The SilverStone DS380 case is functional, quiet, and looks great. The hard drives have been upgraded over the last two years until it held a full complement of six 4TB WD Green drives - all spinning bits of metal though.

Initially I had the BSD-based FreeNAS operating system installed. I had a single hard drive in its own ZFS pool for TV and movies, and a second ZFS pool made up of 5 hard drives for documents and photos.

FreeNAS is straightforward to set up and use, provided you only want to do things supported out of the box or by plugins. Each plugin is installed into its own jail, giving you full control over what data is accessible to it. There were several things that I really liked about FreeNAS: the web administration interface worked a charm, and I had no problems using ZFS. It was quite mind-blowing taking disks offline and increasing the size of the pool by swapping in larger drives. I hadn’t used any BSD systems before this, though, so making custom jails, installing custom software, and running jails behind a VPN were all quite frustrating tasks.

Eventually these frustrations got too annoying, and I decided to use what I knew from working with Docker and switch to my usual operating system: Arch Linux. I had read a lot about btrfs when I first set up FreeNAS, so I was keen to switch filesystems at the same time. The migration was an exercise in care: I had approximately 6TB of data. The ZFS pool could operate in degraded mode without any two of its disks, so in theory I could have made the migration work without using external disks. Long story short, I backed everything up to two disks, replaced the ZFS pool with a btrfs volume, and migrated the data. The ability to add additional disks to the live btrfs volume was very impressive. I almost got caught out by not removing the ZFS partition labels with wipefs. I’ve been warned that raid5 and raid6 aren’t very well tested in btrfs, but I’ve gotten away with it so far.
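For the record, the wipe-and-grow part looked roughly like the following. This is a sketch from memory rather than my actual shell history - the device name and raid profiles here are assumptions:

# remove stale ZFS partition labels so btrfs doesn't trip over them
wipefs -a /dev/sdb

# add the freed disk to the live (mounted) btrfs volume
btrfs device add /dev/sdb /mnt/drive

# rebalance so existing data spreads onto the new disk
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/drive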

I had these services running on both operating systems:
  • Emby for streaming content to Chromecast and other devices
  • Couchpotato for finding movie torrents
  • Sickrage for finding TV show torrents
  • Headphones for finding music torrents
  • Transmission for downloading the torrents
  • OpenVPN for basic hiding of downloading activity
  • Nginx web server & reverse proxy to help access the above services and host local content
The Arch setup had:
  • systemd-controlled Docker containers for each service
  • btrfs subvolumes created for each Docker data volume
  • anything that even thinks about torrents accessing the internet via the VPN
With systemd it is very easy to set up dependencies between services, and with Docker it is easy to link containers together. I’ve had a bit of experience running various Docker containers under CoreOS, so it wasn’t much effort to get these services running under systemd.

They all follow the same general template:

[Unit]
Description=Some Dockerized Service

After=docker.service
Requires=docker.service

[Service]
Restart=always
RestartSec=60
TimeoutStartSec=0

ExecStartPre=-/usr/bin/docker kill container-name
ExecStartPre=-/usr/bin/docker rm container-name
ExecStartPre=/usr/bin/docker pull user/upstream-container-name

ExecStart=/usr/bin/docker run --net=host --rm \
    -e TZ="Australia/Sydney" \
    -v /mnt/drive/server-configs/container-name:/config \
    -v /mnt/drive/Video:/media \
    -v /mnt/drive/Music:/music \
    -v /mnt/drive/Downloads:/downloads \
    --name=container-name \
    user/upstream-container-name

ExecStop=/usr/bin/docker stop -t 2 container-name

[Install]
WantedBy=multi-user.target
  • I’m directly mounting volumes from /mnt/drive.
  • This setup pulls a fresh image on every restart; a more robust approach would be to pin the container version (see the sketch after this list).
  • Note that directives prefixed with =- are allowed to fail without consequence.
  • This template doesn’t link to any other Docker containers.
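If you did want to pin, the only change to the template is to reference an explicit tag (or digest) instead of pulling whatever the latest image happens to be. A minimal sketch, with a made-up version tag:

ExecStartPre=/usr/bin/docker pull user/upstream-container-name:1.2.3

ExecStart=/usr/bin/docker run --net=host --rm \
    ...
    user/upstream-container-name:1.2.3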
An example where one service depends on another is transmission depending on the VPN:

[Unit]
Description=Transmission Server
# https://github.com/dperson/transmission

After=vpn.service
Requires=vpn.service

...

ExecStart=/usr/bin/docker run \
    --net=container:vpn \
    ...

Management is all done with the systemctl tools. I’ve enabled all the services
to start at boot.
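Dropping a new unit file in and bringing the service up looks something like this (the unit name is just an example):

# install the unit and make systemd aware of it
cp transmission.service /etc/systemd/system/
systemctl daemon-reload

# start it now and have it start at boot
systemctl start transmission.service
systemctl enable transmission.service

# tail the container's output
journalctl -u transmission.service -f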

Since each service is really just a server running inside Docker, I settled on an existing container image for each one.
The nginx proxy is a nice addition because it lets you decide how to expose each service. Instead of remembering a different port for each service (8096 for Emby, 9091 for Transmission, etc.), you can visit http://your-machine/emby or http://your-machine/transmission. I also set up a simple home page, served by nginx, to point users at the correct services.
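The proxying itself is only a couple of location blocks per service. Roughly like this - a sketch, not my exact config, and note that some apps need their base URL configured before they behave behind a sub-path:

server {
    listen 80;

    # simple home page linking to each service
    location / {
        root /usr/share/nginx/html;
    }

    # Emby's default port is 8096
    location /emby/ {
        proxy_pass http://127.0.0.1:8096/;
        proxy_set_header Host $host;
    }

    # Transmission's web UI defaults to 9091
    location /transmission/ {
        proxy_pass http://127.0.0.1:9091/transmission/;
        proxy_set_header Host $host;
    }
}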




I set up the mount points so that each service can only access what it needs. For example, Transmission can only write to two download directories - one for music, and one for TV/movies. (Actually that isn’t 100% accurate: to avoid losing in-progress downloads when the Docker container is restarted, the incomplete-downloads folder is also a mount point.)
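In unit-file terms that just means the Transmission container only gets the download mounts and nothing else. A sketch - the host paths and container-side paths here are illustrative and depend on the image you use:

ExecStart=/usr/bin/docker run --rm \
    --net=container:vpn \
    -v /mnt/drive/Downloads/Music:/downloads/music \
    -v /mnt/drive/Downloads/Video:/downloads/video \
    -v /mnt/drive/Downloads/incomplete:/incomplete \
    --name=transmission \
    dperson/transmission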


You can see the service files for each container on GitHub at https://github.com/hardbyte/home-media-setup.
