Welcome back, mainly to me, hah.

It’s been almost two years since my last post, but I have made sure to keep the blog running in case the information on it proves useful to people. I have even used it to recall a few old scripts here and there, so I’ll be sure to keep it around. I still have the problem of not wanting to document things as I finish them, which is terrible when you want to run a blog, but we can play a little catch-up here.

In the past two years, here are the projects I can recall working on:

The HTPC Evolution

If you recall the HTPC post, that setup has since changed. I started getting into the single-board/minimal computing movement. First, like many others, I got myself a Raspberry Pi B. I also purchased an APU1 C4 unit that I used, and still use, as my primary household firewall.  The HTPC was then moved to my basement, where it replaced my 1U rack-mount server as my primary (and only) server box.  This helped save money on the power bill, for whatever that’s worth.

The Raspberry Pi B worked great as a media server for quite some time, but its power just wasn’t quite there.  I had to use the stock interface; the majority of the aftermarket skins were just too heavy for the Pi’s anemic processor.  I also had to mount NFS shares manually via script, mainly using the mount options that I found here.  That worked well enough, but ultimately I wanted more, and I also like to change things randomly because I get bored.
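
For anyone curious, the mount script was nothing fancy. Here is a minimal sketch of that kind of script, assuming a hypothetical NAS at 192.168.1.10 exporting /export/media; the exact options I actually used came from the guide linked above:

```bash
#!/bin/bash
# Wait until the NAS answers, then mount the media share.
# Server address, export path, and options are placeholders.
until ping -c1 192.168.1.10 >/dev/null 2>&1; do
    sleep 2
done
mkdir -p /storage/media
mount -t nfs -o ro,nolock,noatime 192.168.1.10:/export/media /storage/media
```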

After the Pi, I went and purchased an ASUS Chromebox M004U, along with two 4GB memory modules for a total of 8GB.  I followed all the steps in the well-written wikis to root this device and sideload OpenELEC.  This became my HTPC of choice until very recently.  I had virtually zero issues using this system to stream 1080p content and bitstream DTS-HD audio from files in the 20-30GB range across my network (with the APU1 firewall above sitting between my DMZ and LAN).  It’s a flawless solution, if a touch expensive, and maybe a tad *too* much power for this purpose.  It even grew with me when I upgraded to a 4K TV; I was able to select 4K output in Kodi on this device.  I have since re-purposed this machine as described below in the server experiment.

For now, I have circled back to using the Pi.  The Pi really is a great HTPC for typical use, but its inability to play H.265 content and its 1080p resolution ceiling are not going to work for me.  I am anxiously awaiting a proper OpenELEC build for the newly released Odroid-C2.  I could upgrade to the Pi’s younger but better brother, the Pi 3, but I want 4K resolution and gigabit Ethernet out of my upgrade.  The C2 has both, but unfortunately, because its architecture is more restrictive/closed compared to the Pi’s completely open one, there are some video issues on the C2 that the Odroid guys are working on.  You can follow the “official” thread here.

As you can see, there really wasn’t much above that warranted a blog post, as all of it was already documented and available via Google.  Nothing too impressive.

The Server Evolution

[Image: side view]

I have, for quite some time, run my own servers, usually out of my own home.  I started years ago (2002 or so), and have held the LinuxNiche domain since approximately 2005.  For the first several years, my servers ran on whatever hardware I had lying around (much of it from my father, who gave me equipment his job was finished with).  This ranged from simple, old 486 machines to, at one point, a full six laptops clustered with OpenMOSIX and custom kernels.  That was a fun project, but it unfortunately pre-dates the blog.

At some point, my father donated to me three full-size rack-mount servers: two 1U units and one 4U unit.  I used these as my servers for a very long time, and also noted a distinct increase in my power bill :).  I hosted email for my father and me, as well as various friends, and ran an IRC network, websites, this blog, databases, and other stuff off these systems.  Over time, I started shutting them down, until all that was left was a single 1U (which happened to be the most powerful of the three) that powered everything.  That brings us to where I left you two years ago.

During a move between houses, I was staying in a too-small apartment with my family and didn’t have room for these guys, so for that brief time all the LinuxNiche servers were hosted on Linode VPSes, mainly because they were inexpensive, and I split the monthly cost with a friend for quite some time.  I don’t make (much, if any) money from anything I do with LinuxNiche, so cheap was paramount.

After moving into the new home, I dissolved the VPS and consolidated back onto the 1U rack mount.  However, as I mentioned above, I then took my HTPC running an AMD APU and used it, powered by Proxmox (which quickly became my favorite hypervisor), to run all my servers for about a year.  A few months back, I ran across this new(ish) technology in virtual systems called “Docker.”  Now, Docker containers aren’t technically “virtual servers”; they operate more like a fancy, easy-to-use chroot jail for specific processes.  With Proxmox, I was using OpenVZ containers instead of full VMs for resource management, but there is still overhead from running services inside the container that are unrelated to your application.  Docker improves on this slightly: while the images are still large (you still have, for example, an entire Ubuntu root filesystem in there), you are actually only executing that one application inside it.  There is plenty of discussion out there about the usefulness (or lack thereof) of Docker, but it is gaining in popularity and has a massive repository of containers at the Hub, so I wanted to play with it.
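
To make the “one application per container” idea concrete, here is a minimal sketch; the image and port are purely illustrative, not one of my actual services:

```bash
# Pull a stock image and run a single service from it.
# The image still carries a full userland underneath, but the
# container executes only nginx (no init, cron, sshd, etc.).
docker pull nginx:alpine
docker run -d --name web -p 8080:80 nginx:alpine

# List the processes inside the container: just nginx.
docker top web
```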

Therefore, I started using CoreOS on my AMD APU as my primary server.  Now, I was not overly happy with this setup, and it took a bit to get into the “mindset” of Docker containers over standard virtual machines, but it eventually (mostly) worked.  However, CoreOS seems designed to work as part of a “fleet”; when CoreOS upgraded itself, it would just up and happily reboot, because that should be no problem, right?  The containers would just restart on another node, lol.  Well… not so much when you only have one node.
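
Had I stuck with a single CoreOS box, the auto-reboot behavior can be tamed by changing the update/reboot strategy. I didn’t actually go this route, so treat the following as a sketch assuming the stock locksmithd setup:

```bash
# Tell locksmithd not to reboot automatically after an update.
# Other strategies include "reboot", "etcd-lock", and "best-effort".
sudo tee -a /etc/coreos/update.conf <<'EOF'
REBOOT_STRATEGY=off
EOF
sudo systemctl restart locksmithd
```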

So, as I stated earlier, I had been collecting a nice little group of single-board computers, and I decided to see what they can do and use them for what they’re really good at: cheap, distributed computing clusters.

So I then ventured into combining a bunch of my little ARM-based SBCs/SoCs into a Docker Swarm.  Now, I could not get my Minecraft server to run adequately on these little guys; there just isn’t quite enough power for that, as my Minecraft world is fairly large.  So I cannibalized the Chromebox mentioned above, and it now runs my Minecraft server in a Docker container as a headless unit.  The other server that won’t run adequately is an Emby server.  It runs fine if you don’t use any transcoding, but if you’re streaming to, say, a PS4, Roku, and/or Chromebox, you’ll need a touch more power, so I’ve moved that to my local workstation.
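
For reference, running a Minecraft server under Docker on the Chromebox boils down to something like this; the itzg/minecraft-server image is a popular choice on the Hub, used here purely as an illustration, and the path and memory limit are hypothetical:

```bash
# Headless Minecraft server in a container, world data kept on the host.
docker run -d --name minecraft \
  -e EULA=TRUE \
  -e MEMORY=2G \
  -p 25565:25565 \
  -v /srv/minecraft:/data \
  itzg/minecraft-server
```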

Beyond that, however, I have now built my own PicoCluster-style box consisting of two Odroid-C1+’s and a NanoPC-T1, running minimal Arch Linux ARM roots with Docker/Shipyard.  The NanoPC required a custom kernel, which was not easy (at least for me).  I also have a separate Odroid-XU4 attached to the same swarm that serves all my media via NFS; with gigabit Ethernet and USB 3 support, the XU4 is the perfect little file server.  I placed the 3x3TB drives from the AMD PC above into a 4-bay USB dock in a btrfs RAID 5 configuration (yes, I know, RAID 5 bad; we’re not talking about mission-critical stuff here, and I perform daily backups of the server storage to another USB drive), and voila.  I am now using even less power than the AMD machine to run all my servers.
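
The storage side of that looks roughly like the following, with device names and export paths as placeholders rather than my exact setup:

```bash
# Create a btrfs RAID 5 array across the three 3TB drives in the USB dock.
# Device names are hypothetical; check yours with lsblk first.
mkfs.btrfs -L media -d raid5 -m raid5 /dev/sda /dev/sdb /dev/sdc
mkdir -p /srv/media
mount LABEL=media /srv/media

# Export it over NFS to the LAN, then reload the export table.
echo '/srv/media 192.168.1.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra
```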

Now, ARM is not Docker’s target audience right now.  There are a few thousand armhf images at the Hub, but I couldn’t find exact ARM matches for the servers I wanted to replace.  So what I did was find the images I wanted to run that were built on a base with an ARM equivalent, such as Ubuntu, Debian, or Alpine.  I then just cloned their git repos, changed the FROM line to pull in an armhf-equivalent base, and the rest of the build proceeded for me.  Mount over the configuration files and profit.  Luckily, there was already a Shipyard hack for ARM, and it took a bit to figure out the TLS work to encrypt the connections, but overall it worked.  This highlights one of the benefits of Docker: easy portability.
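
The rebuild step is as simple as it sounds. A sketch, assuming a hypothetical image whose Dockerfile starts from stock Debian (the armhf/ namespace on the Hub provided ARM base images at the time):

```bash
# Clone the image's source, retarget its base image at an armhf build,
# then rebuild locally on one of the ARM boards.
git clone https://github.com/example/some-service.git
cd some-service
sed -i 's|^FROM debian:.*|FROM armhf/debian:jessie|' Dockerfile
docker build -t some-service:armhf .
```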

This then led to other Docker issues I hadn’t anticipated.  First, a lot of people recommend a “storage” (data-only) container to hold shared files, and say the “right way” to connect different services is to “link” the containers, and so on.  Well, come to find out, although Shipyard manages a Docker “Swarm,” the swarm is apparently only useful for automated scheduling/deployment to hosts with available resources; there is a limitation in Docker that if you link containers, or try to grab the volumes from another container, they have to reside on the same host.  This surprised me, as I thought Docker was intended for large, wide-scale deployment scenarios, but if all linked containers must reside on the same physical host, then there must not be a lot of linking going on in the larger environments.
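
For context, the pattern in question looks like this; the container and image names are made up, and both `--link` and `--volumes-from` only resolve against containers sitting on the same Docker host:

```bash
# A data-only container holding shared files.
docker create --name media-data -v /data busybox true

# These only work if "db" and "media-data" live on the same host;
# the swarm will not satisfy them across nodes.
docker run -d --name db postgres
docker run -d --name app --link db:db --volumes-from media-data my-app-image
```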

It should also be noted that I did not set up my own image registry to push and pull images from.  I built the images myself on one of the C1s, then used docker save to dump them to tar files, scp’d them around to the other systems, and imported them back in.  This is obviously manual work for something Docker is supposed to handle for you, but I didn’t want to use a public registry or push to the Hub, nor did I want to go to the effort of running my own.
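
That shuffle is only a few commands per image; the hostnames here are placeholders for the other swarm nodes:

```bash
# On the build node: export the image to a tarball and copy it out.
docker save -o some-service-armhf.tar some-service:armhf
scp some-service-armhf.tar root@odroid-c1-2:/tmp/

# On each receiving node: import the tarball back into Docker.
ssh root@odroid-c1-2 'docker load -i /tmp/some-service-armhf.tar'
```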

Swarm also provides no high availability, container migration, or any of the other benefits you might think would be innate to a “cluster.”  Instead, I’m supposed to integrate something like etcd or Consul and back it with components like docker-compose, Fleet, Kubernetes, and so on.  I have not actually done this yet, so I am not entirely sure how they all fit together, but I will get to it eventually.  Though, more likely, now that I’ve got Docker working, am familiar with how to use it, and even appreciate some of its simplicity, I will probably start looking at alternatives and try a few of those.

[Image: setup]

All in all, this blog, along with my primary website, email, database, IRC, IRC services, HAProxy, and about a half dozen other servers, all run on this 4-node swarm, with (most of) the above limited to 128-256MB of memory per service.  Figure that, across the four nodes, I have roughly 5GB of memory and 20 processor cores with which to distribute my workload.
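
The per-service memory caps are just Docker’s standard resource flags. A sketch, with a made-up service name and image:

```bash
# Cap this service's container at 256MB of RAM;
# --memory-swap can additionally bound swap usage if desired.
docker run -d --name irc-services -m 256m my-irc-services:armhf
```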

Now, I’m clearly not suggesting this will rival any real supercomputers out there, nor do I think that in its current configuration it could survive the “Slashdot/Reddit effect” or “hug of death” by any means.  However, I believe it could be made resilient enough.  Adding a few (or several, as needed) more Odroid nodes, spinning up multiple copies of the web Docker service, and updating HAProxy to load-balance between them?  Feasible.  I believe the limiting factor here wouldn’t be the hardware; it would be a home internet connection that was never intended to withstand that kind of traffic.
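
The HAProxy side of that idea is just a backend with more than one server entry. A sketch with hypothetical node addresses, not my current config:

```bash
# Append a round-robin backend pointing at two copies of the web service.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
backend web_nodes
    balance roundrobin
    server web1 192.168.1.21:8080 check
    server web2 192.168.1.22:8080 check
EOF
systemctl reload haproxy
```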

My list of components for this build:

  • Mediasonic 4 Bay Dock – Not part of the custom case.
  • 1x Odroid XU4 – Not a part of the custom case/box; has its own Ethernet and separate switch.
  • 2x Odroid C1+ – Potentially upgrade to C2’s.
  • 1x NanoPC-T1 (Note: I would strongly recommend against this device.  Get another Odroid, or Pi.)
  • 3-4x MicroSD (or eMMC) – All of mine, save one, are powered by eMMC.  I keep one on MicroSD because I reload him a lot.
  • 3-4x USB Drive – Extra storage is desired.  This is separate from the 4 drives to go in the above bay.
  • 1x Coolerguys Dual 80mm Fans – May not be necessary; PicoCluster doesn’t use fans, but I felt better with them.  There is a potential downside of dust.
  • 6x Acrylic Sheets – The ones I used are not the linked ones.  I got a single large sheet from a local hardware store and cut it to about 10″x10″.
  • 1x Qunqi Acrylic Clear Box – I’d recommend not doing this and making your own from the Acrylic Sheets instead, like this.
  • 1x InStream 7 Port USB Charger – Works great, claims 3A potential meaning it should power a C2 as well.
  • 1x Ethernet Coupler
  • 4x 6″ Micro USB – I used two that came with the InStream above and purchased two more.
  • 1x C14 to 3 Prong Y Adapter
  • 5x Cat6 Cables
  • 1x 5 Port Switch

Overall, this build comes to roughly $330 at current pricing for four Odroid-C1+’s with four MicroSDHC cards and four 32GB USB drives.  With 16GB drives, PicoCluster comes to about $100 per node, but they are only offered in 3- and 5-node bundles.  This is four nodes for roughly $330, so a savings of about $100.  Building a 3-node version this way would come to around $276 at current pricing, so still cheaper, but not by as much; larger swarms would save more.  There are also a few items missing from the list, namely an HDMI F-F adapter and a USB port adapter.

All of this shows a couple of things.  PicoCluster isn’t too terribly priced given the work it takes to build the above (plus, theirs is a little more professional with better power management; of course, they’ve made a lot more of them :D), and if I were to start from scratch, I’d probably just buy theirs, though I wish they had more options than just the RPi2.  On the other hand, DIY types can still penny-pinch and get more powerful devices (C1/C2 instead of Raspberry Pi 2s).  It also illustrates that I have no life, and should probably start a business selling half the things I put together for the hell of it.

    Happy Tinkering!