JAWS scripts for Audacity.

I like using Audacity. I probably use it at least once a week, so I was quite happy to stumble across these scripts on GitHub.

I haven’t tested them yet but they look great.


DIY Smart Kitchen Planner

Emma was quite rightly pointing out to me that it’s impossible for her to keep track of where I am from one day to the next. Between work and gigging, I have a busy life! She wanted to put a horrible paper calendar on the fridge, but the geek in me just thought that was far too retro. Lol. I hate that word.

So I built this. It takes my Office365 calendar and a family to-do list and displays them, along with the time and the local weather, on a tiny 7-inch screen that sits nice and tidy on a shelf in the kitchen. Here’s a quick demonstration.
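Under the hood it’s nothing exotic. Here’s a minimal sketch of the display side, assuming the calendar, to-do and weather data have already been fetched; every name and value below is made up for illustration, not the real code:

```python
from datetime import datetime

def build_display(events, todos, weather, now=None):
    """Combine calendar events, to-do items and the weather into screen lines."""
    now = now or datetime.now()
    lines = [now.strftime("%a %d %b %H:%M"), f"Weather: {weather}"]
    # One line per calendar event, then the shared to-do list.
    lines += [f"{e['start']} {e['subject']}" for e in events]
    lines += [f"[ ] {t}" for t in todos]
    return lines

# Example with made-up data.
screen = build_display(
    events=[{"start": "19:30", "subject": "Gig in Drogheda"}],
    todos=["Buy milk"],
    weather="12°C, light rain",
)
print("\n".join(screen))
```

The real thing just refreshes this on a timer and pushes it to the little screen.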

Error in iOS: Unhandled Promise Rejection.

This is less of a solution blog post and more of a rant against the fucking stupidity of Apple and how they are driving me absolutely crazy. The problem is, developers put up with their shit because, let’s face it, their App Store is incredible, they give reasonably good shares of profits, and people think their devices are sexy. But for a developer working outside the Apple ecosystem, i.e. outside the App Store, Apple is a thunderous pain in the ass. Excuse my colourful language, but this is the second time in six months that Apple’s deliberate curtailment of web applications running in iOS browsers has caused me serious problems and massive time investments.

The first problem was a few months ago. I found that the JavaScript play event wouldn’t fire when the screen was locked. This problem doesn’t exist on Android, but on iOS, when the screen is locked, JavaScript no longer runs. It’s a simple solution to a complex problem, with the aim of improving battery life. It’s not so bad; I’ve been able to get around it.

The second problem though is more complicated.
Here’s the main problem. I’m creating a complex application where thousands of valuable audio tracks will be available for people to listen to through a web interface. Without decent security, someone with basic scripting skills could download every single track, so several layers of authentication are required. Here’s what happens, in summary. I’m leaving a lot of information out because, of course, I don’t want to give away my secrets, and I also don’t want to tell would-be thieves how to get around what I’ve done.

  • A user clicks play or play all.
  • The browser asks the server for permission and sends on a password that’s unique to that session along with some other identifiable information.
  • The server responds with the required authentication that will enable that track to be streamed.
  • The browser then uses that information to request the track.
  • The server receives a correct request with all of the security information so it sends the track to the browser for streaming.
  • The browser streams the track.
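I won’t publish the real implementation, but the general shape of that handshake can be sketched with a signed, short-lived grant. Everything here, from the secret to the function names, is a hypothetical stand-in, not my actual code:

```python
import hmac
import hashlib
import time

SERVER_SECRET = b"example-secret"  # hypothetical; the real key never leaves the server

def issue_grant(session_password, track_id, now=None):
    """Server side: validate the session, then sign a short-lived grant for one track."""
    if session_password != "session-pass":  # stand-in for the real session check
        raise PermissionError("bad session")
    expires = int(now if now is not None else time.time()) + 60
    msg = f"{track_id}:{expires}".encode()
    sig = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return {"track": track_id, "expires": expires, "sig": sig}

def serve_track(grant, now=None):
    """Server side: only stream if the grant is intact and unexpired."""
    now = int(now if now is not None else time.time())
    msg = f"{grant['track']}:{grant['expires']}".encode()
    good = hmac.compare_digest(
        hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest(), grant["sig"])
    if not good or now > grant["expires"]:
        raise PermissionError("invalid or expired grant")
    return f"streaming {grant['track']}"

# The browser asks for a grant, then presents it with its stream request.
grant = issue_grant("session-pass", "track-42")
print(serve_track(grant))
```

Tampering with the grant or replaying it after it expires gets the request refused, which is the property that makes bulk scraping awkward.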

The problem is that, technically, the user hasn’t directly requested to listen to that track by clicking or tapping something. As far as iOS is concerned, the user requested the security validation; the subsequent request for the track came from a background conversation between the browser and the server, without user intervention. The browser therefore stupidly considers the audio streaming step an automated action and doesn’t actually play the audio.

How am I going to get around this?

I’m not completely sure. I have ideas. Some involve reducing the security, which isn’t an acceptable course of action. But instead of authenticating each streaming request, I could authenticate the page and then use that authentication for the streaming requests on that page. It’s not as nice or as solid as the other method, but for damned iOS, it will likely have to do.
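As a sketch, that page-level approach might look something like this: sign one token when the page authenticates, then verify it cheaply on every stream request on that page. Again, all names and values here are hypothetical, not the real scheme:

```python
import hmac
import hashlib
import time

SECRET = b"example-secret"  # hypothetical server-side key

def sign_page_token(session_id, now=None):
    """Issued once when the page authenticates; covers every stream on that page."""
    expires = int(now if now is not None else time.time()) + 1800
    sig = hmac.new(SECRET, f"{session_id}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{session_id}:{expires}:{sig}"

def allow_stream(token, now=None):
    """Checked on each stream request; no per-track round trip needed."""
    session_id, expires, sig = token.split(":")
    expected = hmac.new(SECRET, f"{session_id}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    now = int(now if now is not None else time.time())
    return hmac.compare_digest(expected, sig) and now <= int(expires)
```

The trade-off is exactly as described above: one stolen page token covers every track on that page until it expires, rather than each track needing its own grant.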

I use iOS every day. The iPhone is my primary mobile device. I like the interface, and the VoiceOver screen reader is just brilliant. But as a developer, I have developed a deep and justifiable frustration toward Apple. They are trying to force me into developing a native app for Ceol FM. It’s definitely in the plans to do this, but there’s only so much time I can spend on it.

There have been others who have had this problem. Here are a few related forum and blog posts:

Trialling cutaways in video.

I’m amazed by the advancements that now allow seriously complex content creation and publishing right from a phone. In such a short space of time, the potential has become huge. I can’t help wondering if the expectations of blog readers have grown in parallel. Are text-based blogs relegated to the history books in favour of multimedia-rich blog content?

PFSense – SNMP configuration fails with UTF-8 characters.


The error

pfsense /services_snmp.php: XML error: Undeclared entity error at line 166 in /conf/config.xml 

This error is shown under Status \ System Logs after saving the SNMP configuration in PFSense.


The SNMP XML file doesn’t support UTF-8 characters such as á, é, í, ó and ú.


Use standard alphanumeric characters.

I hope that helps.

All sites now on HTTPS.

I know no one cares about this stuff, but as part of the upgrades I’ve completed over the past two months, every site I directly maintain on these servers is now accessible exclusively over HTTPS.

Technically, none of these sites need HTTPS. They don’t accept payments or personal details, but still, it’s nice to know that when you go to a site, the traffic is secured.

Moving PFSense WAN from private to public IP range


You need to change the WAN IP address of your PFSense appliance / virtual machine to an address on a different subnet. This is most useful if you set up PFSense on a LAN and you now need to move it to a WAN.


The error "WAN IP is on a different subnet than default Gateway" is displayed when changing the WAN IP address.

If you had PFSense configured with the WAN on a private address range, you cannot then move PFSense to a public IP address range using the web interface.


Although you can’t complete this task using the web interface, you can do it through the shell. Connect to the shell either using the maintenance menu or SSH, and use option 3 to reconfigure the interface IP addresses. Enter the new IP and gateway details for the WAN, then reboot. Note that from this point on, you will not be able to access PFSense from the old IP address, so make sure you are prepared for this.

Second problem

When you change the gateway, the old gateway is still the default.

You will not be able to reach the Internet from servers behind PFSense, nor from the PFSense console / shell.


Log in to the PFSense web UI, then change the default gateway.

This is managed under System \ Routing.

First remove the old gateway on the private address range.

Then, in the gateway table, select the default gateway from the list. In my experience, you can’t leave this set to automatic, as it doesn’t automatically promote the one remaining gateway in the list to the default.

Reboot PFSense again to make sure everything is still working as expected.

Connecting to Hyper-V from a non-domain joined Windows 10 workstation.

I wrote something about this topic before. The first blog post about connecting from a non-domain joined Windows 10 machine to Hyper-V is here.

The problem at that stage was a firewall rule. For remote management of Hyper-V, a more open firewall rule needs to be created to allow WinRM connections from a different subnet. As I’m administering Hyper-V from a client on a VPN connection on a different subnet, this is a requirement that hadn’t been there previously.

The following explains the client-side configuration required on the Windows 10 workstation so that it can connect to the Hyper-V server.

Firstly the error:

When you try to connect to the Hyper-V server using the Hyper-V snap-in, you’ll see the following error:

Hyper-V Manager
Delegation of credentials to the server
“{ServerName}” could not be enabled.
CredSSP authentication is currently disabled on the
local client. You must be running with administrator
privileges in order to enable CredSSP.


Firstly, set your network interface and the VPN interface to private. Previously you would have done this through the HomeGroup settings, but as Windows 10 1809 no longer has this functionality, you’re better off just using our great friend PowerShell.

Get-NetConnectionProfile -InterfaceAlias "WiFi" | Set-NetConnectionProfile -NetworkCategory Private
Get-NetConnectionProfile -InterfaceAlias "Ethernet 5" | Set-NetConnectionProfile -NetworkCategory Private

Please be aware that you will need to change the network interface aliases to match your own. If you don’t know them, type Get-NetConnectionProfile in PowerShell.
Now add the server name to your hosts file. This is in c:\windows\system32\drivers\etc\hosts. To do this, open Notepad as an administrator, open that file, then add a line to the end in the format {IPAddress} {ServerName},
substituting the IP address of your server and the server name.

Next, enable PowerShell remoting and give the remote system delegated access to your workstation. I hope this goes without saying, but only do any of this if you are sure that the remote host, and indeed your workstation, are trusted.

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "{ServerName}"
Enable-WSManCredSSP -Role client -DelegateComputer "{ServerName}"

There’s one last thing to do: configure your local policy to allow delegating fresh credentials with NTLM-only server authentication.

  1. Click Start.
  2. Type gpedit.msc then hit Enter.
  3. Expand Computer Configuration \ Administrative Templates \ System \ Credentials Delegation.
  4. On the right, double-click Allow delegating fresh credentials with NTLM-only server authentication.
  5. Click Enabled.
  6. Click the Show button.
  7. Provide the value wsman/{ServerName}.
  8. Click OK.
  9. Click OK again.

Of course, replace {ServerName} with the name of your server.

That’s all there is to it. Just open the Hyper-V console and connect to your Hyper-V server. Provide credentials in the format {ServerName}\{UserName}.

Getting certificates working in NGINX, ISPConfig and LetsEncrypt.

My goal when moving everything onto new servers was also to use the brilliant service from Let’s Encrypt to encrypt web traffic. Of course, Let’s Encrypt doesn’t give end users the same kind of verification of a site’s identity as a service such as DigiCert or Thawte, but those services generally charge about €100 per certificate per year, so for sites such as this, it’s not necessary.

I love Let’s Encrypt. Creating the certificate is easy. So too is renewing it. It’s all just so straightforward! Plus it’s all done through either the ISPConfig UI or through a shell.

This leads me to the reason for this post. I’ve found forums where people said that ISPConfig 3.1.3 wasn’t assigning newly created certificates, but I just couldn’t figure out why. I didn’t want to move to the unstable branch of ISPConfig, where this problem has been fixed, so I assumed that creating the cert using the shell would be straightforward. I’ve done this quite a few times before; the certbot package handles it.

The problems started here. I got errors like this:

[Errno 2] No such file or directory: '.well-known/acme-challenge/NCvDRv3…'

That gave me the idea to check {WebsiteName}/web/.well-known/acme-challenge. The owner was root, so I changed it to the website user and related group. I then added a test file and tried to access it over HTTP. This didn’t work.

Then I remembered that NGINX is set up on this server to only allow specific URL types. WordPress is the CMS in use on this server for the most part, so the rules are defined to only allow WordPress-generated URLs. The idea is that this stops malware infecting sites; I’ve had experience with WordPress vulnerabilities giving rise to exploits that essentially gave file system access. Malware would then be placed into a content directory, where it would be caught by search engines.

So I tried to put in a location rule that would allow .well-known/acme-challenge. This still didn’t work.

I did a bit more digging and found that ISPConfig had already done something similar in the virtual host config file. So although I was being smart and modifying the main NGINX config for the server, expecting that I would need this functionality for every site I’m hosting, my change was being overridden by the virtual host settings defined by ISPConfig.
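For anyone trying the same thing, the sort of location rule I mean looks roughly like this; the exact root path is illustrative rather than a copy of my config:

```nginx
location ^~ /.well-known/acme-challenge/ {
    root /usr/local/ispconfig/interface/acme;
    default_type text/plain;
}
```

The ^~ modifier makes this prefix match win over regex location rules, which is the behaviour you need when the rest of the config restricts URL patterns. The catch, as described above, is that a rule like this at the server level is overridden by whatever the per-site vhost defines.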

Once I figured that out, the change to the command was simple.
Change from

certbot certonly --webroot -w /var/www/{SiteName.tld}/web -d {SiteName.tld} -d www.{SiteName.tld}

Change to

certbot certonly --webroot -w /usr/local/ispconfig/interface/acme -d {WebSite.tld} -d www.{WebSite.tld}

It ran perfectly.

The next few challenges were with configuring the virtual host. Again, this stemmed from the fact that ISPConfig was my preference for handling all of this, so its config got in the way, or at the very least pointed me in the wrong direction initially.
Firstly, when ISPConfig generated the certificates, it put them into the virtual directory under /var/{WebsiteName.tld}/certs/, but it added .key files, whereas Let’s Encrypt created .pem files. I didn’t want to take the differences at face value; I wanted to make sure they were just cosmetic.
Modifying the paths to point instead to /etc/letsencrypt/live/{WebsiteName.tld}/cert.pem and privkey.pem was the obvious fix. Once that was done, NGINX handled HTTPS traffic as expected. Here are the related certificate lines:

listen *:443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/letsencrypt/live/{WebsiteName.tld}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{WebsiteName.tld}/privkey.pem;

The last part was to make all HTTP traffic redirect to HTTPS. This was quite straightforward as well. I just added a second server block to the top of the {WebsiteName.tld}.vhost file in /etc/nginx/sites-available/ with the following entries.

server {
listen *:80;
server_name {WebsiteName.tld} www.{WebsiteName.tld};
return 301 https://www.{WebsiteName.tld}$request_uri;
}

Check your configuration:

service nginx configtest

Restart the NGINX service when ready.

service nginx restart

If something goes wrong, check the service status and the related logs:

systemctl status nginx
journalctl -u nginx

I hope this helps you. I didn’t get to the bottom of why ISPConfig isn’t associating the right certificate with the virtual host. That’s for another time, I suppose.

Note on 22nd February 2019.

I was encountering an annoying problem when maintaining this site through the WordPress app: I was getting cURL errors relating to SSL. A bit of digging showed that the intermediate certificate wasn’t found. This DigiCert page was very useful in verifying that yes, something was wrong with my certificates on these domains. The problem was simple: I had specified cert.pem instead of fullchain.pem, so the issuing certificate authority wasn’t included in the chain. I should have known this was going to cause issues.

phpmyadmin missing mbstring


You open phpMyAdmin and you get this error:
phpmyadmin missing mbstring


It’s easy to fix. Just type this at a prompt on the server:

sudo apt-get install php-mbstring

Then restart apache with:

sudo systemctl restart apache2


Now, why oh why would this suddenly happen for no reason that I can think of? I haven’t used phpMyAdmin in years on this server, so it’s probably a package that got removed during an apt-get autoremove at some point. But still, it’s obviously in use; why remove it automatically?

Nothing in this job that I’m doing at the moment has been easy. The following posts have been written; I’ll explain what I’m doing very shortly.

I’ll give you a very brief idea of what I’m doing.
I’m spending too much on cloud hosting. I have several projects on the go, most aimed at promoting traditional Irish music.

  • Ceol FM online streaming service. Promotes traditional Irish music around the world.
  • Music at the Gate. Promotes traditional Irish music in Drogheda. I would love my children to grow up surrounded by Irish culture.
  • Darragh Pipes. I love playing music. This promotes my own performances.

Then I also run this site, Computer Support Services and a few other websites from different servers as well.
It’s costing far too much.
So I’ve bought a reasonably powerful box, installed Hyper-V and I’m running everything off several virtual machines.
When I’ve had free time, I’ve been working on migrating everything across. But sites like Ceol FM provide functionality over and above a simple website, so migrating that isn’t straightforward. Obviously, hosting everything on one server is tricky to do properly; security needs to be a priority, as does monitoring and bandwidth / resource control.

Blog Archive