In the office. Getting less done.

This is absolutely nuts. I'm here writing this blog post a little after 11:30am because it's something I can do without thinking too much. I'm in the middle of two very complex bits of work that require my full focus and concentration. But I can't do what I need to because of the noise outside my window and inside the office. Right now there are two people standing outside having a chat, and a very noisy saw has been running for the past ten minutes. Just before the saw, someone was talking very loudly in the office while two other colleagues were trying to investigate a 2FA issue. These are all tasks that these people need to carry out and I have absolutely no problem with what any of them are doing. It's just that I simply can't focus properly on my own tasks. Just when I think the noise is going to stop, it comes back again. Like a knife through my ears.

I read a computer interface using a screen reader that provides feedback as synthesized speech through headphones that I wear constantly. So imagine this from my perspective. It would be like trying to read complicated text off a page while I kept moving the page and continually strobed the lights in the room. It would be very distracting and stressful. Now imagine I stopped for less than a minute and then, without warning, began doing the same thing again. You would be on edge just waiting for the next distraction.

But this is only one part of the reason that this return to the office has me driven round the bend. This morning I left the house just before 7:50am. The bus didn't actually arrive until 8:20am, and then we got stuck on the M1, the main road between Drogheda and Dublin, for ages as a result of a crash. So I didn't get to the office until 9:40am. If I was working from home, I would start at around 9am, but I would have had a good long walk with the dog first, so my head would be in the right space to have a productive day. I get straight into it, and with my custom-built office environment at home, I can be assured of absolute solitude and quiet.

It's not all about work either. At about 3:30pm, I'll go into the house and for about 10 minutes I'll catch up with my two children to ask them how school was. But then I go back to my office and I'll easily work until 6pm without any problems, because I know that when I finish, I have a one minute walk back into the house. Whereas in Dublin, I'll finish at 5pm on the dot knowing I have just over an hour before I get home. So there's no incentive for me to continue working to get the job done.

It just seems senseless to me. My job is not about constantly engaging with people. I value my colleagues and I enjoy collaborating with them, but I find that I can do this equally well remotely. I can work more effectively when I'm not in the office.

Rant over.

That damn saw is still running. I don’t know how I’m going to get anything done today.

Updating certificates on RDS (Remote Desktop Services)

Every year now, I need to update the certificates on my Microsoft Remote Desktop Services servers.

This involves:

  • The IIS front end
  • The RDWeb web client
  • The RDS components configured through Server Manager: connection broker, gateway and web access
  • The RDS Gateway

Rough instructions:

Install the certificate

  1. Open the MMC
  2. Click File, then Add/Remove Snap-in
  3. Choose Certificates and click Add
  4. Choose “Computer account”
  5. Choose “Local computer”, the computer this console is running on
  6. Expand Personal\Certificates
  7. Right-click Certificates and, under All Tasks, choose Import.
  8. Now import your new PFX file
  9. I recommend giving it a friendly name.
  10. Now right-click this certificate and hover over All Tasks.
  11. Click Export
  12. Follow the wizard. Don’t export the private key.
  13. Save it somewhere that will be easy to find shortly.
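 
If you would rather script the import and export than click through the MMC, something along these lines should do the same job. This is just a sketch; the paths are placeholders and the PFX password is prompted for.

# Import the PFX into the local machine's Personal store.
$Password = Read-Host -Prompt "PFX password" -AsSecureString
$Cert = Import-PfxCertificate -FilePath "C:\Certs\rds.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $Password
# Export just the public part (no private key) as a .cer file for the RDWeb client step later.
Export-Certificate -Cert $Cert -FilePath "C:\Certs\rds.cer"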

Update IIS

  1. Open IIS (Internet Information Services) Manager
  2. Expand your server, then expand Sites
  3. Right-click the Default Web Site.
  4. Click Edit Bindings
  5. Click on the HTTPS port 443 binding.
  6. Click Edit
  7. Choose your certificate using the friendly name that you configured earlier.
  8. Click OK, then Close.
  9. You can now close the IIS administration interface.
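 
As an aside, the same binding change can be scripted with the WebAdministration module. A rough sketch, assuming you know the thumbprint of the new certificate and that the site really is called “Default Web Site”:

Import-Module WebAdministration
# Thumbprint of the certificate you imported earlier - replace with your own.
$Thumbprint = "<thumbprint of the new certificate>"
# Point the existing HTTPS binding at the new certificate in the LocalMachine\My store.
$Binding = Get-WebBinding -Name "Default Web Site" -Protocol "https"
$Binding.AddSslCertificate($Thumbprint, "My")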

Update the RDWeb web client

You do this by importing the new broker certificate and then re-publishing the client.

  1. Open PowerShell as administrator
  2. Import the certificate using the following command. Replace everything between the <> with the path to the .cer file you exported earlier.
    Import-RDWebClientBrokerCert <path to .cer file>
  3. Now re-publish the web client so that it picks up the new certificate.
    Publish-RDWebClientPackage -Type Production -Latest

Update the RDS service using server manager

  1. Open Server Manager
  2. In the navigation pane, open Remote Desktop Services (RDS)
  3. In the main deployment overview, click the deployment Tasks button.
  4. Click Edit Deployment Properties
  5. Highlight the Certificates option on the left
  6. For each certificate, do the following:
  7. Click the certificate
  8. Click Change
  9. Choose the second option, to add from a file
  10. Browse to the PFX file
  11. Type the password
  12. Click OK
  13. Click Apply
  14. You will need to do this at least four times, once for each certificate.
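 
If you would rather not click through all four certificates, the RemoteDesktop PowerShell module can usually do the same thing. A sketch, with the broker name and PFX path as placeholders:

# Apply the same PFX to all four RDS certificate roles.
$Password = Read-Host -Prompt "PFX password" -AsSecureString
$Broker = "broker.example.com"   # your connection broker FQDN
foreach ($Role in "RDGateway", "RDWebAccess", "RDRedirector", "RDPublishing") {
    Set-RDCertificate -Role $Role -ImportPath "C:\Certs\rds.pfx" -Password $Password -ConnectionBroker $Broker -Force
}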

Update the certificate on the gateway

  1. Open the gateway manager
  2. Right click the gateway on the left
  3. Click Properties
  4. Move to the Certificates tab
  5. Choose the third button down to import a new certificate.
  6. Browse to the PFX
  7. Type the password when prompted

At this point, you will probably need to reboot the connection broker and front end servers.

Create a fresh shuffled playlist on your NAS every day.

Here’s what I needed:

I have a huge amount of music and, for some reason, I just like it to play shuffled. When I hit play on the Sonos, I just want it to randomly go through and play the music on the NAS. This actually works beautifully on the Sonos; it doesn't really need this. But I'm working on something interesting using HomeAssistant at the moment and it doesn't support the simple requirement of playing all music in a folder. It requires a playlist.

I already have a Linux server so each day at 3am, when everyone is in bed, a script will run to recreate and re-shuffle the playlist. So when new music is added, it will get added to the playlist and the order is always different.

First of all, put this in /etc/cron.daily. Obviously, replace the parts in [] brackets to match your own values.
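 
Here's a minimal sketch of the kind of script I mean, saved as something like /etc/cron.daily/shuffle-playlist, assuming your music is mounted at [/mnt/nas/music] and the playlist should end up at [/mnt/nas/playlists/everything-shuffled.m3u]:

#!/bin/bash
# Rebuild a shuffled playlist from everything on the NAS.
MUSIC_DIR="[/mnt/nas/music]"
PLAYLIST="[/mnt/nas/playlists/everything-shuffled.m3u]"
# Find all audio files, shuffle the list and write it out as a fresh playlist.
find "$MUSIC_DIR" -type f \( -iname '*.mp3' -o -iname '*.flac' -o -iname '*.m4a' \) | shuf > "$PLAYLIST.tmp"
mv "$PLAYLIST.tmp" "$PLAYLIST"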

Okay, you have that file created?

Now set permissions on it using chmod 755.

Finally, add an entry to your crontab to run it each day at 3am. First, open the crontab by typing crontab -e and pressing enter.
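 
Assuming the script above was saved as /etc/cron.daily/shuffle-playlist, the entry would look something like this:

# m h dom mon dow command
0 3 * * * /etc/cron.daily/shuffle-playlist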

That's all there is to it.

Deploy Azure Log Analytics extension through Azure Arc using policy – Deploy effect missing?

This might interest some of you. I found an inconsistency with the policies available when browsing from within the Azure portal and the policies available from the Microsoft KB.

I am going round in circles with Microsoft documentation at the moment, so I thought I was going to need Microsoft support soon.

All I am trying to do is configure Windows updates on servers.

However, Windows updates require that an Azure Automation account is created. This is done.

Then the update conformance reports require that an Azure Log Analytics workspace is created. This is done.

However, the Log Analytics workspace requires that I deploy a Log Analytics agent to all Azure Arc managed servers. This is where I am encountering trouble.

I have about four options but two of them are preferable:

  •  DSC
  • Policy

Using policy, I can validate that the agent is not installed. However, based on the documentation, the “effect” of the “Configure Log Analytics extension on Azure Arc enabled Windows servers” policy should be “DeployIfNotExists”. However, it only seemed to support two effects: “Disabled” and “AuditIfNotExists”. So there didn't seem to be a remediate function within this policy, contrary to the documentation.

The other way of doing this would be to install the DSC agent onto all Azure Arc servers. This seems like overkill and I would prefer not to go down this rabbit hole if possible. However, I have explored it at length. I have written a script, compiled it and it's ready to go. But again, I would need to deploy the DSC agent, so I'm back to step one: deploying MSIs through Azure Arc using policy.

I finally found that when I looked for the “Configure Log Analytics extension on Azure Arc enabled Windows servers” policy from within the Azure portal, it had the “DeployIfNotExists” effect, not just the “AuditIfNotExists” effect. This cost me four hours of messing around. I wish I had just poked around and not bothered with the bad documentation.
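 
For what it's worth, once the portal shows the DeployIfNotExists version of the policy, the assignment and remediation can also be scripted. A rough sketch using the Az PowerShell modules (parameter names vary a little between module versions, and the subscription scope is a placeholder):

# Find the built-in policy definition by its display name.
$Definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq "Configure Log Analytics extension on Azure Arc enabled Windows servers" }
# Assign it at subscription scope with a system-assigned managed identity so it can deploy the extension.
$Scope = "/subscriptions/<subscription id>"
$Assignment = New-AzPolicyAssignment -Name "deploy-la-agent-arc" -PolicyDefinition $Definition -Scope $Scope -Location "northeurope" -IdentityType SystemAssigned
# Note: the assignment's managed identity also needs the role(s) named in the policy definition granted at that scope.
# Existing machines won't be touched until a remediation task is created.
Start-AzPolicyRemediation -Name "deploy-la-agent-arc-remediation" -PolicyAssignmentId $Assignment.PolicyAssignmentId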

Traveling for Tunes 2022 podcast

It's that wonderful time of year again and, with that minor annoyance of an international pandemic finally out of the spotlight, it's again time for my session sprint. This year, it's very different. I really enjoyed including the family, but there were still the usual trips to some of the normal towns as well.
I love getting your feedback. Drop me a note and let me know what you think.

PowerShell script to alert when an object is added or removed from an important group.

It's Thursday at 12:27pm. You are sitting in a meeting. Meanwhile, someone has just gained access to your domain admin group and now has the keys to your company's entire compute platform. But all is not lost, if you have the right alerting in place. Less than five minutes later, you receive an alert to say that the group membership has changed and you can take immediate action. Saving the world… and your company… from absolute disaster! That's what this PowerShell script does, along with lots of checks. For example, let's say someone gets in through a back door and starts messing around with systems. Well, if the script doesn't see the right number of logs in the folder, it starts to get a bit jumpy. That results in an alert as well. Or if the previous logs can't be read, the same thing happens. The world gets an email.
So here you go. Have fun with this. I encourage you to add more checks. It has been expanded since I published it to add even more validation, to make sure that the script and the infrastructure don't change. But this will get you well on the right road.
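 
The full script has more checks than I can fit here, but the core idea is small enough to sketch. This is not the published script, just a minimal outline; the group name, log path and mail settings are placeholders:

Import-Module ActiveDirectory
$Group = "Domain Admins"
$LogPath = "C:\GroupAudit\DomainAdmins.txt"
# Grab the current membership.
$Current = Get-ADGroupMember -Identity $Group -Recursive | Select-Object -ExpandProperty SamAccountName | Sort-Object
# Compare against the membership recorded on the last run and alert on any difference.
if (Test-Path $LogPath) {
    $Previous = @(Get-Content $LogPath)
    if ($Previous.Count -gt 0) {
        $Changes = Compare-Object -ReferenceObject $Previous -DifferenceObject $Current
        if ($Changes) {
            Send-MailMessage -To "alerts@example.com" -From "ad-audit@example.com" -SmtpServer "smtp.example.com" -Subject "Membership of $Group has changed" -Body ($Changes | Out-String)
        }
    }
}
# Record the current membership for the next run.
$Current | Set-Content $LogPath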

Audit all Windows firewalls on your domain and display the results in a UI using PowerShell Universal

Do you want the code for this? No problem. Just skip down to the heading that says “The Code!”.
Yeah yeah yeah. I know that I have given out plenty about Ironman Software and their PowerShell Universal product, very strongly, on a few different sites. But unfortunately for me, there's just nothing else on the market that can wrap a nice, easy(ish) UI around PowerShell scripts. So stick with me while I explain what I'm doing here.
My need:
Hey, first, look up something called the STAR principle. It's an Amazon interview technique and I'm going to use it here to explain the last few days quickly and easily.
STAR stands for:

  • Situation
  • Target
  • Action
  • Result

So the Situation is:

I need to provide a comprehensive, up to date, reproducible and accurate report of the status of the Windows firewalls on servers.

The Target is:

Re-use a script that I wrote two years ago, wrap it in a UI and give that to the director so he can run this report, or ask someone else to do it, without coming to me more than one time.

Action:

Ah. Here's where it gets fun.
Firstly, here is how it hangs together:

  1. I have all the processing in a PowerShell module. I'm comfortable working in the command line, so having it in a module full of functions that I have written to get me through the day by removing repetitive tasks suits me well. But it doesn't suit anyone else. Having PowerShell vomit out text to the director wouldn't put me on his Christmas list. In fact, I'm already not on his Christmas list. Maybe I should go back to plain text? Pondering for a different day. Sorry, I went off on one there. Anyway, what I'm saying is I want to wrap that in a UI but I don't want to rewrite code. Re-use and recycle.
  2. I went in to look around PowerShell Universal for the first time in ages. I was getting weird errors when using PowerShell 5 where it wasn't recognising stored secrets. It turns out that the maximum time you can store a secret for is one year. So I suppose that's just something I missed in some bit of documentation somewhere.
  3. Then, sometime over the past year, I tightened security on all of the service accounts, so bye-bye to storing Kerberos tickets in an active user session. This made me rethink how I was handling permissions for this script.
  4. Sometime in the past two years since I wrote this really great function, I got too clever for my own good. In other words, I over-complicated it. Initially, I was just passing in a string as a parameter, but at some point I must have decided that I wanted to pass in custom objects with servers in them, and I also started using the pipeline. What am I talking about? Okay, I'll explain briefly.
    This is how you would pass something to a function using parameters:
    First, let's say we have an array called $MyWonderfulArray with two fields in each item: ServerName and TrafficDirection. If the function doesn't support taking the fields from the pipeline, we need to explicitly loop through every item in this array and pass it the values for ServerName and TrafficDirection. That sounds kind of slow, doesn't it? Yeah. It is! Here's an example:
    $ServerVariable = $MyWonderfulArray[0].ServerName
    $InboundOrOutbound = $MyWonderfulArray[0].TrafficDirection
    MyCoolFunction -ServerName $ServerVariable -TrafficDirection $InboundOrOutbound
    Now, firstly, you might ask what the idea of the [0] is. That's just getting the first item in the array. I could loop over the array, but this wasn't meant to be a PowerShell tutorial.
    But now let's take a quick look at using the pipeline. Let's say your function expects two parameters: ServerName and TrafficDirection. Because these are already fields in my array, I don't need to explicitly pass them as parameters to the function, assuming of course that I have configured the parameter section at the top of the function to grab these fields from the pipeline (there's a short sketch of that param block just after this list). So now, without needing to loop or even explicitly pass over the fields, I do this:
    $MyWonderfulArray | MyCoolFunction
    See? The pipeline is cool.
    But because I had changed the function, I was encountering infinite loops and some occasional errors. That wasn't too difficult to fix. I got it sorted within a few minutes.
  5. I found that tens of thousands of lines were added for some particular servers. It turns out that whenever a user logs into an RDS session host server running 2019, it creates a whole lot of firewall rules for that session. Okay. Anyway, I fixed that. It required painfully removing tens of thousands of rules, then applying a registry fix to each session host server so that the problem doesn't repeat in the future. Still, this took a good three hours tonight, because I was deleting so many rules each time that the MMC snap-in kept freezing. Why didn't I use PowerShell? Well, because there are about 40 other rules in there specific to the applications running on those session host servers, and the last thing I want is someone from that faculty calling me on Monday morning, with a room full of students anxiously waiting to start their labs, while I try to figure out which rule out of the tens of thousands that I removed caused this particularly horrible delay to their teaching and learning. So that really wasn't fun.
  6. Next, I ran the script again, but found that for some reason one of the filters for traffic direction wasn't working. I'm running this code through remote invocation and it's a non-native PowerShell command, so sometimes these things can behave in unexpected ways. Again, that wasn't really difficult to sort: a Where-Object to only return the output that I wanted got around the problem. But you must understand, oh most patient reader, that each time I ran this script it could take up to an hour or even two. It goes across quite a lot of servers and really dives deep into the firewall rules, what they allow and what they reject. So every change, even a minor one, took a long time to process.
  7. I had messed around with creating a UI for this a few years ago, but I tidied it up tonight. I had a stupid bug in it: it was using the entire count of servers when reporting on the number of bad / dangerous rules. Now I have a separate variable with the count. Why I didn't just do that a few years ago, I don't know.
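 
Here's the param block sketch I promised in point 4. The function body is made up; the point is just the ValueFromPipelineByPropertyName attribute and the process block:

function MyCoolFunction {
    [CmdletBinding()]
    param (
        # Bind these from matching property names on objects coming down the pipeline.
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$ServerName,
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$TrafficDirection
    )
    process {
        # The process block runs once for every object received from the pipeline.
        Write-Verbose "Checking $TrafficDirection rules on $ServerName"
    }
}
# Each object's ServerName and TrafficDirection properties are bound automatically.
$MyWonderfulArray | MyCoolFunction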

Result:

It all works. It took a lot longer than I would have liked but I’m really happy with the result. Something that anyone with the right level of permissions can independently use without my input.

Absolutely nothing in my life has gone to plan this week. Well, all I have had time for is technology problems, so I suppose my life has just been technology. Still, though. I still need to get to another job tomorrow where I installed Cuda but the GPU isn't found after a reboot. I spent three hours on that on Wednesday evening, but now the person just wants me to install Docker and use Cuda and Kaldi through containers instead. That's going to be another truckload of fun, but it's going to have to wait until tomorrow because I'm tired.
Hey, for the record, I'm not really a fan of Nvidia at the moment either. Their documentation is out of date, their drivers are out of date and they mix and match terms. For example, at the top of the driver support page they talk about the Tesla T4, but then down the page they say the driver only supports series 9 and above. How the hell am I meant to know what series the Tesla T4 is? Anyway, sorry. I'm rambling again.
Because I'm feeling very generous, here's some code that will just change your life if you are administering a lot of Windows servers and you need to audit all the firewall configs.

The Code!
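 
The real script does a lot more, but if you just want the shape of it, here's a stripped-down sketch rather than the full thing. The server list, output paths and the definition of a “bad” rule are all placeholders to adjust:

# Collect enabled firewall rules from each server, along with the remote addresses they allow.
$Servers = Get-Content "C:\Audit\servers.txt"
$Results = Invoke-Command -ComputerName $Servers -ScriptBlock {
    Get-NetFirewallRule -Enabled True | ForEach-Object {
        $Addresses = ($_ | Get-NetFirewallAddressFilter).RemoteAddress
        [PSCustomObject]@{
            Server        = $env:COMPUTERNAME
            RuleName      = $_.DisplayName
            Direction     = "$($_.Direction)"
            Action        = "$($_.Action)"
            RemoteAddress = $Addresses -join ","
        }
    }
}
# Flag inbound allow rules that are open to any remote address.
$Dangerous = $Results | Where-Object { $_.Direction -eq "Inbound" -and $_.Action -eq "Allow" -and $_.RemoteAddress -match "Any" }
$Results | Export-Csv "C:\Audit\firewall-rules.csv" -NoTypeInformation
$Dangerous | Export-Csv "C:\Audit\firewall-dangerous-rules.csv" -NoTypeInformation
Write-Output ("{0} rules collected, {1} look dangerous" -f $Results.Count, $Dangerous.Count)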

Complex technical fuck up.

Ordinarily, I would say that adding bad language to a post title is something that should be avoided. But this was indeed a complex technical fuck up. Hey, on a slightly different topic, ever feel like walking into a meeting after something goes wrong and just saying bluntly something like: “That stupid fucken problem was a pain in my ass for days because the stars aligned to screw me. It's like some divine asshole thought to itself: “Hey, Darragh over there is not quite busy enough. Let's throw way more shit at him and see what happens”. Well, better luck next time, divine-o-ass. I'm not giving up that easily”. Damn. There were nested quotes in that rant. That's a new best in shit writing, isn't it?

Okay. Okay. I’m calm now. I just needed to get that out of my system.

I hear you ask: who pissed in Darragh’s cornflakes this morning? Well, it was Docker. For the past two days.

Here is the rough outline of the crap I have had to deal with outside of work over the past few days.

Firstly, this all relates to HomeAssistant and docker.

  • It all started at the beginning of the week. I was at a wedding for two days and during that time I noticed that the cert for HomeAssistant had expired. That usually means that it has lost its connection to the cloud service. When I got back on Wednesday, I found that the subscription was valid but it had indeed lost the connection. I checked for logs that would indicate the source of the problem, but no luck. Not a single log was written to suggest where the problem was. I was running 2022.1 and 2022.3 was out, so I suspected the container either needed a restart or needed the latest version installed. So that's what I did. First, I restarted the container. That didn't work. Second, I updated. That didn't work. Finally, I rebooted the host server. This is where the world went into free fall and everything broke.
  • The server came back up and I was met with a default “onboarding” page for HomeAssistant. The air turned a shade of blue while I cursed, thinking that it had reset the HomeAssistant install or something crazy like that. But no. I was able to find my files in the container. Here's where everything went stupidly bad.
  • I have a few other things running on this Docker host. Yes, I know that really isn't supported by HomeAssistant, but I'm confident enough with Linux to make this work. I say this, but if you keep reading you will see that although I'm confident with Linux, maybe I have no right to be. Did I mess up? I'll let you decide.
  • I ran docker ps to show the list of running containers. I could see hassio (short for home-assistant.io) had four running containers: hassio_audio, hassio_multicast, hassio_samba and hassio_supervisor. It looked like these containers were pointing back to where I had HomeAssistant stored, but it wasn't picking up the right config. I thought to myself, where the hell are my other containers for Pihole, streaming and Unifi? But anyway, I didn't think much about it. This is where I completely messed up. I should have stopped, thought, and realized that if those containers were running, they should be shown by docker ps.
  • I relinquished thoughts of this being a quick fix though, and set up HomeAssistant as a new installation with the intention of restoring from a backup. Do you take backups? I do. Every night. I was thankful for this. Anyway, I keep rambling. I log into the new installation only to find HomeAssistant Supervisor isn't available. This is a Core-only install of HomeAssistant. Alarm bells begin ringing. Why the hell is this only the Core installation, and where has my installation gone?
  • I try to completely uninstall this. Knowing that I had a full backup, I was willing to get a bit aggressive at this point. The problem is I get an access denied error when I try to remove any of the containers with docker rm hassio_samba, for example. I find that this is because of the hassio_apparmor service. But stopping it with systemctl stop hassio_apparmor.service doesn't work. I found that it needs to be stopped with aa-teardown. Only then could I remove the containers.
  • So. I remove the containers and I try to install with this command:
    docker run -d --name=homeassistant --restart=always --network=host -v /etc/homeassistant:/config homeassistant/home-assistant:stable
    That didn’t work. I got errors like this:
    Failed to start hassio-apparmor.service: Unit hassio-apparmor.service has a bad unit file setting.
    I’m still not sure what caused that. But I moved on. I found that for some reason, the hassio_apparmor and hassio_supervisor files weren’t removed from /etc/systemd/system/ so I deleted these and the problem went away.
  • I was encountering lots of weird errors, so I took a step back and started looking at everything on the server. During the small hours of this morning, I finally found something that triggered an “oh crap” moment. I found a tutorial that mentioned installing HomeAssistant from the snap store in Ubuntu. I know I didn't do this. But while I was looking for HomeAssistant files, during one of the many times I manually uninstalled this, I remember seeing files in /snap. So I had a moment of realization. Snap must be installed! Now, I have checked my .bash_history and that of the root account. Not once did I issue a command with the word snap in it. So I have no idea why this is installed. I ran one command and this answered all my questions.
    whereis docker
    Sure enough, there's a second binary for Docker in /snap/docker. Running
    snap list
    shows that the docker snap is installed.
  • I remove this:
    snap remove docker
    Then I reboot
  • Victory! Now I run docker ps and I see my missing docker containers, such as the one for Ubiquiti, Pihole etc. I also see the docker containers for the proper installation of HomeAssistant. But here's where I shot myself in the foot. I had completely mangled those containers while rampaging through the file system looking for and purging anything that could be causing conflicts during those times that I was encountering errors. The problem now is that the original and correctly set up docker containers are completely messed up. I try reinstalling using the proper version of docker, but the images and the containers are in a terrible state. I'm not able to reinstall because there are images that still exist in a partial or damaged state. (Yes, I really screwed this up, didn't I?) However, I can't give up. I manage to delete the images by finding the ID of each image and passing it to the docker rmi command. Sometimes these had dependencies that couldn't be removed because they were too mangled, so I used docker rmi -f (imageID).
  • Afterwards, I used updatedb and locate to find all existing homeassistant and hassio files related to a container. I manually removed these and started the installation again.
  • For the record, I find that the most reliable way to install HomeAssistant with the HomeAssistant Supervisor docker containers is to use these Deb installers:
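 
Roughly, that means the OS agent .deb from the home-assistant/os-agent GitHub releases, followed by the homeassistant-supervised.deb from the supervised-installer releases. Something like this sketch, with <version> replaced by whatever is current:

# Install the OS agent first, then the supervised installer.
wget https://github.com/home-assistant/os-agent/releases/download/<version>/os-agent_<version>_linux_x86_64.deb
dpkg -i os-agent_<version>_linux_x86_64.deb
wget https://github.com/home-assistant/supervised-installer/releases/latest/download/homeassistant-supervised.deb
dpkg -i homeassistant-supervised.deb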

Don't do what I did. After 3am this morning, I was tired and I installed the container first and then the OS agent. HomeAssistant complained that the supervisor wasn't running in privileged mode, but a quick restart of the container fixed this.

What a complete pain in the ass. This blog post is long, but it pales in comparison to the hours and hours I spent on this until the early hours of the morning for the past few days.

I will say one more thing. I read a post a few months ago where someone said that they started off with a ConBee II Zigbee USB stick but then upgraded to something a little more serious. In my firm opinion, the ConBee II stick is simply amazing and I doubt there is anything else on the market like it. I restored my HomeAssistant config, and because the ConBee II keeps an independent record of all the Zigbee devices that are connected to it, once the HomeAssistant config was reapplied, the ConBee stick just worked. No fuss, no complaints. Having this independent bridge outside the HomeAssistant ecosystem has saved me from a lot of work twice now. Of course, I now regularly take backups of that config as well. Just in case.

Building a high performance compute server on Azure and installing KenLM and Cuda/Kaldi with NVIDIA Tesla drivers.

About a week ago, I was asked to build a new server. This is going to be used for research purposes so the spec is quite high. 16 dedicated CPU cores, 110GB RAM and an NVIDIA Tesla T4 GPU. It’s running on Azure and the applications needed on it are a little different. So this was a lot of fun.

First, the VM type: it's a Standard_NC16as_T4_v3 server. You can't just go and buy one of these. You must create a support request with Microsoft so that they can release the number of cores required for this specific type of server. This is a painful process! There were 200 processor cores available in that subscription, but obviously not of the right type. However, there is a very useful category when creating a support request in the Azure portal for requesting additional cores. What isn't so useful is that the portal didn't understand that I already had plenty of cores; I needed cores of this specific family for the research server. I spoke to an HPC (High Performance Computing) specialist about something unrelated during the week and he knew what I was talking about right away. But it took over a week for Azure support to understand what I was looking for and then make the required changes.

Moving on: once Microsoft did what they needed, setting up the new server wasn't difficult. It was created within about 10 minutes of me finishing the VM creation wizard.

The main requirements of this server are Cuda and KenLM and this is really what this post is about. I don’t spend every day in a Linux environment. So when I need to install something like this that I wouldn’t use often, I rely heavily on documentation. It’s not that I couldn’t go hunt down all the installation sources and dependencies. But that would be a waste of time. And time is not something I really like to waste.

I took notes during this process. These include the commands that I used to install everything and the various sources I read through to learn a bit more about what I was installing and how it could and should be done.

In case anyone copies and pastes the following lines, I am going to precede my comments with #.

# First you need to determine the GPU that you have and the suggested driver. Fortunately, this is way easier than it used to be.
apt install ubuntu-drivers-common
ubuntu-drivers devices

# Do not use this next command. It installs way too much and will result in massive dependency issues when you go to install Cuda.
# ubuntu-drivers autoinstall

# After installing the GPU driver, you must reboot.
reboot now

# The following command will install the NVIDIA GPU driver. It will also install the unmet dependencies.
apt install nvidia-driver-470 libnvidia-gl-470 libnvidia-compute-470 libnvidia-decode-470 libnvidia-encode-470 libnvidia-ifr1-470 libnvidia-fbc1-470

# This will install all of the Cuda dependencies.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
apt-get update
apt-get -y install cuda

# Add the Cuda binaries to your path:
echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> ~/.bashrc

# You can test that Cuda is installed and that the version installed is as expected as follows:
nvcc --version

# If at some point you need to start again, this one-liner will remove all the NVIDIA and Cuda packages that you might have installed using aptitude / apt-get.
# apt clean; apt update; apt purge cuda; apt purge nvidia-*; apt autoremove; apt install cuda

# The following lines will install KenLM on Ubuntu 20.04.
apt-get update
apt-get install build-essential libboost-all-dev cmake zlib1g-dev libbz2-dev liblzma-dev -y
git clone https://github.com/kpu/kenlm
cd kenlm/
mkdir build
cd build
cmake ..
make -j 4
make install

HomeServer updates 2022!

Oh this post could become very large. So I’m going to try to keep it brief. Perhaps I’ll pad it out with a few more posts over the next few days or weeks. But here goes.
My home server set up for 2022.
First of all, what's all this for? Why do I need a home server? What is it used for?
My requirements for a home server have changed a lot over the past 20 years. Home servers for me started as email and web servers, then progressed into DHCP and DNS servers, as well as firewalls: big, noisy and powerful beasts running under my stairs, then running in self-contained cabinets that were custom built for the task.
However, about five years ago, I decided I would move away from hosting my own DHCP and DNS servers and instead go back to off-the-shelf solutions, such as those provided by my ISP router and the Ubiquiti controller for my wireless network. That has been fine. In fact, it has worked very well. However, I still needed a few small servers from time to time for testing technologies or ideas that I had. The Raspberry Pi 4 has been my tiny compute platform of choice. But this started to get a bit messy. For example: I got more into home automation, so a Pi was dedicated to that. Previously, a Pi was running my Ubiquiti Unifi controller and the code for some of my light automation. I was also frustrated a lot by the lack of decent customization in relation to DNS on the Fritzbox router. So here's what I'm running right now.

  • PiHole for DNS. This is primarily working as an ad blocker for all phones, tablets and computers on the network.
  • HomeAssistant. This handles all my home automation. I no longer even have a Philips or Aqara gateway / hub. I'm instead using a ConBee II USB stick as the Zigbee gateway. This will require some more explanation.
  • The Unifi controller software for my Ubiquiti wireless access points.
  • RClone. This is handling the processing of, and access to, my cloud files.
  • Navidrome. This is my new audio server software. I'll need to explain why that is needed in another post.
  • Bonob. This is a bridge between Navidrome and my Sonos, used to let me play the media directly on the Sonos. Okay, I'm going to give you a quick overview of what I'm doing here because, in my opinion, it's kind of cool.

I'm running a large NAS in the house, but it's getting old. It's probably 8 years old by now. It is reasonably large though, running at 8TB of usable storage space in RAID 5. Replacing that NAS isn't something I'm very interested in doing, for two reasons. Firstly, the cost would be huge. But second, it's a big noisy thing. I could go for a quieter model, but to get that kind of storage from solid state disks would cost a lot of money. So again, I suppose it comes down to cost. I'm going to need a NAS. That is unavoidable. But thanks to an idea from a friend, I will need a lot less space.

So, how am I going to use less space while not removing a lot of files? Simple. Cloud storage. But that leads to another problem. How do you integrate cloud storage into your everyday workflows and systems? For example, if you store your music on Google apps or OneDrive, how does Sonos access it? It's simple. It can't. Not directly anyway. So here's where, for me, it gets interesting.

Firstly, understand that I wouldn't just dump all the music up there, because I have privacy concerns. I have acquired this music on CD over a very long time. It is mine, but I would have a concern that if I start uploading 2TB of music, Microsoft or Google are going to start getting suspicious. Actually, this is a founded concern: Paul Thurrott had this problem with OneDrive about four years ago. So I encrypt the files before sending them to the cloud service of choice. This really complicates things, because now there's really no hope of something like my Sonos reading the files: they are in the cloud and they are also encrypted.

So. here’s how I get around it:

  1. I use RClone to encrypt and copy all files before I copy them from the old NAS up to the cloud storage.
  2. Now I mount the encrypted volume from RClone.
  3. I have set Navidrome up to look at this volume for its music.
  4. Bonob then connects to Navidrome.
  5. Sonos is configured to use Bonob as a music service. Bonob is connected to Navidrome, so the flow is: Sonos asks for music from Bonob. Bonob gets that music from Navidrome. Navidrome gets the file from the encrypted mountpoint provided by RClone. This encrypted mountpoint in turn goes to the cloud storage. All this happens within a maximum of four seconds. Although this sounds like a lot of time, it's really not, and that 4 seconds is only really an issue when starting playback for the first time. When the Sonos is moving to the next track, it allows plenty of time to pre-cache the next track before playing it.
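 
If you want to copy this idea, the RClone side boils down to two commands once you've used rclone config to create a normal cloud remote plus a crypt remote layered on top of it. The remote name and paths here are just examples:

# One-way copy of the music library into the encrypted cloud remote.
rclone copy /mnt/nas/music music-crypt:music --progress
# Mount the encrypted remote so Navidrome can read it like a local folder.
rclone mount music-crypt:music /srv/music --daemon --allow-other --vfs-cache-mode full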

Have you read this far? Good. You’re officially a geek / nerd. Well done. I’m genuinely proud of you. There’s one more thing to just edge the geek factor up another notch.

Twenty years ago, this would have been running on several physical servers. Ten years ago it would have been running on one big beefy computer with several virtual machines dedicated to each function. In this generation of containers, this is all running on a mini-PC with an i7 processor, 16GB RAM and a 512GB NVMe drive. Before the enterprise compute gurus jump out of their skins to tell me that there's no redundancy here: you are absolutely right. But settle yourselves down for a second. I'm going to talk about redundancy and backups now in a second.

Everything is running in Docker containers. So once I have backups, do I really care if the computer dies? Well, yeah. I would care, because this little computer is really nice and it runs way faster than I had expected. But realistically, if it dies, all I do is build a new host operating system, bring my docker containers back over to it, bring the containers up, configure networking, and everything is back again. It's not an enterprise environment with 100% uptime. The main things that matter are that it's cheap to run, quiet, runs at a cool temperature and, if something really goes wrong, I can recover easily. I have the encryption password and salt saved somewhere safe, completely disconnected from the server, so once I can decrypt the encrypted backups, all is good. … I hope.