Connecting to Hyper-V from a non-domain joined Windows 10 workstation.

I've written about this topic before. The first blog post regarding this connection from a non-domain joined Windows 10 workstation to Hyper-V is here.

The problem at that stage was a firewall rule. For remote management of Hyper-V, a more open firewall rule needs to be created to allow WinRM traffic from a different subnet. As I'm administering Hyper-V from a client on a VPN connection on a different subnet, this is a requirement that hadn't been there previously.

The following explains the client-side configuration required on the Windows 10 workstation so that it can connect to the Hyper-V server.

Firstly, the error. When you try to connect to the Hyper-V server using the Hyper-V snap-in, you'll see the following:

Hyper-V Manager
Delegation of credentials to the server
“{ServerName}” could not be enabled.
CredSSP authentication is currently disabled on the
local client. You must be running with administrator
privileges in order to enable CredSSP.

Solution

Firstly, set your network interface and the VPN interface to private. You would previously have done this through the HomeGroup settings, but as Windows 10 1809 no longer has that functionality, you're better off just using our great friend PowerShell.

Get-NetConnectionProfile -InterfaceAlias "WiFi" | Set-NetConnectionProfile -NetworkCategory Private
Get-NetConnectionProfile -InterfaceAlias "Ethernet 5" | Set-NetConnectionProfile -NetworkCategory Private

Please be aware that you will need to change the interface aliases to match your own. If you don't know them, run Get-NetConnectionProfile on its own in PowerShell.
Now add the server name to your hosts file, which lives at c:\windows\system32\drivers\etc\hosts. To do this, open Notepad as an administrator, open that file, then add a line to the end of the file in the format:
1.2.3.4 {ServerName}
Substitute the IP address and the name of your own server.
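
If you'd rather stay in PowerShell for this step too, appending the line works just as well. A minimal sketch, assuming the default hosts file location and an elevated session:

Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "1.2.3.4 {ServerName}"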

Next, enable PowerShell remoting and give the remote system delegated access to your workstation. I hope this goes without saying, but only do any of this if you are sure that the remote host, and indeed your workstation, is trusted.

Enable-PSRemoting
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "{ServerName}"
Enable-WSManCredSSP -Role Client -DelegateComputer "{ServerName}"
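
Given the trust warnings above, it's worth knowing the reverse as well. If you later want to switch client-side CredSSP back off, the matching cmdlet is:

Disable-WSManCredSSP -Role Client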

There's one last thing to do: configure your local policy to allow delegating fresh credentials with NTLM-only server authentication.

  1. Click Start.
  2. Type gpedit.msc then hit Enter.
  3. Expand Computer Configuration \ Administrative Templates \ System \ Credentials Delegation.
  4. On the right, double-click "Allow delegating fresh credentials with NTLM-only server authentication".
  5. Click Enabled.
  6. Click the Show button.
  7. Provide the value wsman/{ServerName}.
  8. Click OK.
  9. Click OK again.

Of course, replace {ServerName} with the name of your server.
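
If you'd rather script this step than click through gpedit, the policy maps to registry values under the CredentialsDelegation key. This is a hedged sketch rather than a tested recipe; run it elevated and substitute your server name:

$base = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation"
# Create the key that holds the server list, creating parents as needed.
New-Item -Path "$base\AllowFreshCredentialsWhenNTLMOnly" -Force | Out-Null
# Enable the policy and merge the list with the OS defaults.
Set-ItemProperty -Path $base -Name AllowFreshCredentialsWhenNTLMOnly -Value 1 -Type DWord
Set-ItemProperty -Path $base -Name ConcatenateDefaults_AllowFreshNTLMOnly -Value 1 -Type DWord
# Value "1" is the first server in the list; add more numbered values for more servers.
Set-ItemProperty -Path "$base\AllowFreshCredentialsWhenNTLMOnly" -Name "1" -Value "wsman/{ServerName}" -Type String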

That's all there is to it. Just open the Hyper-V console and connect to your Hyper-V server. Provide credentials in the format {ServerName}\{UserName}.

Getting certificates working in NGINX, ISPConfig and Let's Encrypt.

My goal when moving everything onto new servers was to also use the brilliant service from Let's Encrypt to encrypt web traffic. Of course, Let's Encrypt doesn't give end users the same kind of verification of a site's identity as a service such as DigiCert or Thawte, but those services generally charge about €100 per certificate per year, so for sites such as this one it isn't necessary.

I love Let's Encrypt. Creating the certificate is easy. So too is renewing it. It's all just so straightforward! Plus it's all done through either the ISPConfig UI or through a shell.

This leads me to the reason for this post. I've found forums with people saying that ISPConfig 3.1.3 wasn't assigning newly created certificates, but I just couldn't figure out why. I didn't want to move to the unstable branch of ISPConfig, where this problem has been fixed, so I assumed that creating the cert using the shell would be straightforward. I've done this quite a few times before; the certbot package handles it.

The problems started here. I got errors like this:

[Errno 2] No such file or directory: '.well-known/acme-challenge/NCvDRv3…'

That gave me the idea to check {WebsiteName}/web/.well-known/acme-challenge. The owner was root, so I changed it to the website user and related group. I then added a test file and tried to access it over HTTP. This didn't work.

Then I remembered that NGINX is set up on this server to allow only specific URL types. WordPress is the CMS in use on this server for the most part, so the rules are defined to only allow WordPress-generated URLs. My expectation is that this stops malware infecting sites, based on experience I've had with WordPress vulnerabilities giving rise to exploits that essentially gave file system access. Malware would then be placed into a content directory where it would be caught by search engines.

So I tried to put in a location rule that would allow .well-known/acme-challenge, along the lines of the sketch below. This still didn't work.
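
For reference, this is roughly the kind of rule I mean. A minimal sketch only; the matching and paths on a real ISPConfig site will differ:

location ^~ /.well-known/acme-challenge/ {
    # Serve the ACME HTTP-01 challenge files directly, bypassing the CMS rules.
    allow all;
    default_type "text/plain";
    try_files $uri =404;
}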

I did a bit more digging and found that ISPConfig had already done something similar in the virtual host config file. So although I was being smart and modifying the main NGINX config for the server, expecting I'd need this functionality for every site I'm hosting, my change was being overridden by the virtual host settings defined by ISPConfig.

Once I figured that out, the change to the command was simple.
Change from

certbot certonly --webroot -w /var/www/{SiteName.tld}/web -d {SiteName.tld} -d www.{SiteName.tld}

Change to

certbot certonly --webroot -w /usr/local/ispconfig/interface/acme -d {WebSite.tld} -d www.{WebSite.tld}

It ran perfectly.
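
certbot remembers the webroot and domains in its renewal config, so future renewals shouldn't need any flags. A hedged sketch of a weekly entry in root's crontab, assuming certbot is on the path:

0 3 * * 1 certbot renew --quiet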

The next few challenges were with configuring the virtual host. Again, this stemmed from the fact that ISPConfig was my preference for handling all of this, so its config got in the way, or at the very least pointed me in the wrong direction initially.
Firstly, when ISPConfig generated the certificates, it put them into the virtual directory under /var/{WebsiteName.tld}/certs/, but it added .key files where Let's Encrypt created .pem files. I didn't want to take the differences at face value; I wanted to make sure they were just coincidental.
Modifying the paths to point instead to /etc/letsencrypt/live/{WebsiteName.tld}/cert.pem and privkey.pem was the obvious change. Once that was done, NGINX handled HTTPS traffic as expected. Here are the related certificate lines:

listen *:443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/letsencrypt/live/{WebsiteName.tld}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/{WebsiteName.tld}/privkey.pem;

The last part was to make all HTTP traffic redirect to HTTPS. This was quite straightforward as well. I just added a second server block to the top of the {WebsiteName.tld}.vhost file in /etc/nginx/sites-available/ with the following entries.

server {
    listen *:80;
    server_name {WebsiteName.tld} www.{WebsiteName.tld};
    return 301 https://www.{WebsiteName.tld}$request_uri;
}

Check your configuration:

service nginx configtest
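
If your distribution's service script doesn't have a configtest action, calling the binary directly does the same job:

nginx -t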

Restart the NGINX service when ready.

service nginx restart

If something goes wrong, check the service status and the related log entries:

systemctl status nginx

I hope this helps you. I didn't get to the bottom of why ISPConfig isn't associating the right certificate with the virtual host. That's for another time, I suppose.


Note on 22nd February 2019.


I was encountering an annoying problem when maintaining this site through the WordPress app: I was getting cURL errors relating to SSL. A bit of digging around showed that the intermediate certificate wasn't being found. This DigiCert page was very useful in verifying that yes, something was wrong with my certificates on these domains. The problem was simple: I had specified cert.pem instead of fullchain.pem, so the issuing certificate authority wasn't included in the chain. I should have known this was going to cause issues.

phpMyAdmin missing mbstring

Problem

You open phpMyAdmin and you get an error telling you that the mbstring extension is missing.

Solution

It's easy to fix. Just type this at a prompt on the server:

sudo apt-get install php-mbstring

Then restart apache with:

sudo systemctl restart apache2
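
To confirm the extension is actually loaded afterwards, a quick check from the same prompt:

php -m | grep mbstring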

Notes

Now, why oh why would this suddenly happen for no reason that I can think of? I haven't used phpMyAdmin in years on this server, so it's probably a package that got removed during an apt-get autoremove at some point. But still: it's obviously in use, so why remove it automatically?

Nothing in this job that I'm doing at the moment has been easy. The following posts have been written; here's a very brief idea of what I'm doing.

I'm spending too much on cloud hosting. I have several projects on the go. Most are aimed at promoting traditional Irish music.

  • Ceol FM online streaming service. Promotes traditional Irish music around the world.
  • Music at the Gate. Promotes traditional Irish music in Drogheda. I would love my children to grow up surrounded by Irish culture.
  • Darragh Pipes. I love playing music. This promotes my own performances.

Then I also run this site, Computer Support Services and a few other websites from different servers as well.
It’s costing far too much.
So I’ve bought a reasonably powerful box, installed Hyper-V and I’m running everything off several virtual machines.
When I've had free time, I've been working on migrating everything across. But sites like Ceol FM provide additional functionality over and above a simple website, so migrating that functionality isn't straightforward. Hosting everything on one server is also tricky to do properly: security needs to be a priority, as do monitoring and bandwidth / resource control.

Recording from the Grey Goose weekly session

Every Monday night there’s a great session in the Grey Goose in Drogheda.  This is a tune that I recorded last night.

I picked up this tune about twelve years ago while touring around Israel. By chance I met up with a Romanian group that loved traditional Irish music. For days we swapped music; this was one of the tunes I picked up then. Years later, it's still a tune that I really enjoy playing.

Windows 2019 product key fails with error (0x80070490)

This seems to be a very commonly encountered problem based on about three minutes hopping around sites from Google.

But the solution is very easy. I stumbled across it while preparing a test server running Windows 2019 and, multitasking as ever, installing a KMS server at the same time. That's a story for another time.

Anyway. The installation of the product key fails with the error (0x80070490).

To get around it, just install the product key from the command line.

Open up the command line as an administrator and issue the following command. Of course, replace [your product key] with the MAK product key that you've obtained from Microsoft.

cscript c:\windows\system32\slmgr.vbs /ipk [your product key]
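
If the server doesn't activate on its own afterwards, the same script can trigger activation. A hedged extra step; whether you need it depends on your activation setup:

cscript c:\windows\system32\slmgr.vbs /ato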

If only things could be straightforward.

Tech failures

I'm good at my job. Hand me a virtual infrastructure built on VMware, Hyper-V or even Xen, Active Directory, any version of Windows, most Linux distributions, PHP, MySQL, NGINX, Postfix, Courier, Exchange, Azure, Office 365, G Suite and all that, and I'll give you a system that's efficient, stable and cost effective. I can even dabble in .NET, PHP, VBS, PowerShell, Bash, SQL and, at a stretch, C and MongoDB.

So I’m quite confident in what I’m doing normally.

But January has been a complete Fuster Cluck of problems that have either taken me far too long to figure out or are still sitting on my ever-growing pile of things I need to get around to fixing or finishing.

This kind of thing happens.  Integrating technologies doesn’t exactly come with an instruction manual.  Sometimes it is a suck it and see situation.  So, here’s a few of the things that haven’t gone right in January.

  • During a meeting last week, I decided it would be brilliant if I could share my Microsoft To Do notes with other people on a project that I'm working on. I love Microsoft To Do. It's just so easy to keep track of what I need to be doing and the priorities of different tasks. I would have appreciated the power of seeing what others on this project are doing as well. So, I configured SCCM to connect to Azure and then to the Microsoft Education store, provisioned the To Do app and, after some messing about, got it installed on a test machine. It ran perfectly under my test account. Next I installed it on someone else's PC. It finally installed, but it isn't usable because it requires Exchange Online to be enabled. We're a Google house here, so To Do is simply not going to work. I wouldn't mind so much, but to even get this far I had to:
    • Create an Azure active directory app in Azure.
    • Figure out where the private key was located in the Azure UI. It turns out I was in a slightly different place than I should have been. That's the stupid thing about the Azure UI: you can't necessarily get to the keys for an application from the properties of that application.
    • Then I encountered the problem of how to assign devices to Azure Active Directory accounts. This is required for the Microsoft Education Store to allow apps to be installed.
    • Then finally, most of the documentation said that when you look under Online Licenses in SCCM, you should see the available store apps. What it doesn't say is that you will need to manually provision any apps that don't get provisioned by default.
  • I was very pleasantly surprised by PFSense. The UI is nearly 100% accessible. I'm delighted, because a few years ago it wasn't that straightforward for a screen reader user to navigate. But getting this running wasn't all that easy. I wrote a blog post about configuring PFSense last week, but in summary:
    • Routing broke.
    • After a rebuild of the config, it worked again. No idea why.
    • Key generation and association for OpenVPN didn’t work as expected.  But I got that working eventually.
    • Got load balancing working. Yay!
    • My Irish Broadband router then decided it was going to see the wrong IP address for the PFSense virtual machine. I've flushed that device several times and each time it gets it wrong. It's seeing an old device. No idea why. The device is only running at 15% usage and 28% power, but its user interface is running very slowly. That's a problem for another day.
  • .NET problems have terrorised me for nearly two months now. Here's the problem:
    • I inherited a large application from a company who are no longer supporting it effectively. This application broke as a result of a change, made outside our control, to the infrastructure that is hosting this service.
    • I worked with this bad company for about a month. But it was clear to everyone that I was coming up with better ideas to fix this than the people who were actually meant to be developing it. So, in frustration, I took over the code in December.
    • The part of the code that broke as a result of the infrastructure change is now fixed. But the fix depends heavily on .NET 4.6.2. The system was written in .NET 4.5, so upgrading it should be straightforward. But no. I was struck again. The code that I've written uses newer versions of libraries that are already in use in the .NET 4.5 application, and updating those libraries breaks the main application.
    • I could go on and on about this, but it's very complicated. I've sat at my desk until 3:30 in the morning trying to get my head around it without getting very far. I have 27 conflicts left. Each time I encounter a conflict, I need to explicitly reference the correct library and version (binding redirects of the kind sketched after this list). However, when that conflict involves communication with a class in the main application, directing the code at that class, and therefore including it for compilation, may or may not add dozens of other conflicts. If I'm lucky, it will just compile. Generally it does, but when it doesn't, it can set me back days. Each time I find a conflict, I have to open the old version of the code and verify the library / namespace that it was previously using.
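
For what it's worth, the standard tool for this in .NET Framework is an assembly binding redirect in app.config or web.config. A minimal sketch; Newtonsoft.Json and the version numbers here are purely illustrative, not the actual libraries from this application:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Redirect every older version of the library to the one actually deployed. -->
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-12.0.0.0" newVersion="12.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>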

Yep. It's all a complete Fuster Cluck.

I'm tired. I'm not getting enough sleep because I'm not good at switching off while I have things that need to be fixed or finished. And I'm finding it very hard to get motivated, because I have had such a long string of problems and I'm constantly tired.

Don’t worry. I’ll break through this cycle. Things will start falling into place.  I’ll keep working away at it.  This isn’t the first or the last time several systems have caused me problems all at the same time.

A note about Microsoft To Do: if you haven't tried it, give it a go. I think you'll like it, if you find yourself needing that kind of thing.

OpenVPN configuration in PFSense.

I spent about six hours this weekend installing PFSense, configuring the firewall and setting up OpenVPN. Here’s a quick run through of the problems and the solutions.

LAN to WAN access

I'm not using VLANs. The main purpose of running PFSense is that I wanted traffic filtered through a reasonably decent firewall sitting on a virtual machine. All the servers that I'm going to use are on the one Hyper-V host. I don't want to open up a lot of ports to these services for both general front-end access and back-end administration.

With the use of a VPN for back end administration, I’ll have three networks in total to set up.

  • WAN interface.
  • LAN network.
  • VPN client network.

At the start, prior to configuring OpenVPN, routing from the WAN to the LAN was fine. But after configuring OpenVPN, I had problems with routing from the LAN out to the WAN.

I wish I could say I found a solution to this, but I didn't. When I fixed the routing issue, I then lost all access to the LAN. So I restored the factory defaults and began the configuration again.

The second time I configured PFSense, I didn't encounter the routing issues.

Certificates

When configuring OpenVPN, I had problems generating the client. The first time, it said I had no CRL; the second time, I had no user cert; and the third time, the server cert wasn't from a trusted CA.

Here’s what I did to fix all of that:

  • Created a new user. This user doesn't have admin access, which is a good idea for VPN use anyway. The new user has a user certificate assigned, created from the CA on the server.
  • I don't know why, but the server certificate created by the OpenVPN server wizard wasn't signed by the root CA on the server. I also couldn't delete that certificate. Instead, I just created a second server certificate and, in the properties of the server, selected that new certificate.
  • No CRL. If a CRL is required by the OpenVPN server, I'm not sure why it wasn't created by the wizard. But in the properties of the OpenVPN server, a handy link is provided to bring you right to the CRL tab under the certificate options.

All of these items were easy to fix.  They seem like bugs in the OpenVPN server creation process.

Routing

This one took a while to fix. I was able to access the PFSense LAN address from VPN clients, but I couldn't access any other devices on the PFSense LAN.

  • Using netstat -r in Windows confirmed that the route was added.
  • There were no firewall rules blocking traffic. But on the upside, I also added tighter rules to specifically allow the traffic that was needed between VPN devices and the LAN.

I thought it was strange that I could access the LAN devices from the PFSense console. So after a lot of thinking, I finally decided to add the routes from the other direction: from the LAN devices to the OpenVPN network. I'm sure there's a way of doing this in OpenVPN itself, but if you're explicitly configuring the routes on the LAN devices, try one of these two commands.

For Windows:
route add 10.0.1.0 mask 255.255.255.0 [PFSenseLANGateway]

For Linux:
ip route add 10.0.1.0/24 via [PFSenseLANGateway]
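
On Windows, a route added that way disappears at the next reboot. If you want it to survive, the -p flag makes it persistent; same network and gateway assumed:

route -p add 10.0.1.0 mask 255.255.255.0 [PFSenseLANGateway]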

Finally. It’s all working.

I used PFSense a lot about eight years ago and had problems at the time running it in a VM. Now, though, I'm delighted that I've started using it again. I love the UI and it's all very logical. I look forward to using the load balancer functionality soon as well.

Trying to drag the good out of a busy week.

While walking home from work last Friday evening, I was feeling particularly thankful. It had been a very busy week and I was really looking forward to switching off for a few hours. I reflected to myself: each week, we all learn something. It might be from a casual conversation, a book, a news report or even social media. So here's a few things that I picked up last week.

  • I spoke to a medical missionary nurse on the way home from Dublin on Wednesday. She started travelling around the world offering help in 1982. Since then she has helped in Nigeria, Kenya, Ghana and Brazil. When not working as a nurse and a midwife, she was also helping young nuns to, as she put it, "make sure this life suited them". I found the conversation with this woman fascinating. She explained that many of the areas she worked in transitioned from providing hospital care / primary care facilities to providing community support and pre-emptive care. The lady didn't agree with this entirely, but she saw the motivation behind it. The young nurses she had once helped to train are hoping that reducing the need for hospitals will reduce the burden on the emerging health care system. She was also telling me that NGOs such as the Red Cross took over from medical missionary nurses from the mid to late nineties. At their peak, there were 250 medical missionary nurses in her organization; in recent times there are about 150 at any one time.
  • I took three short courses this week on LinkedIn Learning. The first was on Azure architecture fundamentals, the second on how Windows 2019 differs from 2016, and the third specifically on Active Directory in 2019. Each course was about two and a half hours long. I can't say I learned anything groundbreaking; especially with Azure, I had done all of this before, but it's good to go back over the basics in case something has been forgotten over time. Still, it was reasonably useful.

Azure Point to Site VPN – Add or replace certificates.

A year ago I set up a new environment for a company who decided to host everything in Azure.

I set up the virtual machines, the storage, the backups and everything that came along with that.  I also gave them a Point to Site VPN connection so they could independently make changes and modify / add data as needed.

Today that VPN connection stopped working. Why? Simple: the cert expired. Microsoft have written great documentation on this topic, but by default the root and client certificates only last for one year. That's for security reasons, of course. Each year you renew your certificates, and if someone has a certificate that should no longer be allowed, that cert becomes invalid. Nice and easy.

However, in addition to using certs, I also have accounts that I can modify on the local machines, and each group of people has a different root cert, so replacing certs isn't a major problem.

That said, I wanted the certs to last longer than one year. I could have made them last ten years, but I thought three years was a happy medium.

You could of course create the certificates using a GUI, but here's a faster way that uses PowerShell.

$date_now = Get-Date
$extended_date = $date_now.AddYears(3)
$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign -NotAfter $extended_date

Now create the client cert using this.

New-SelfSignedCertificate -Type Custom -DnsName P2SChildCert -KeySpec Signature `
-Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" `
-Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2") -NotAfter $extended_date

When you're ready, open the root cert. Remove the lines at the top and bottom of the file that indicate the start and end of the certificate, then in Azure browse to All Resources \ your VPN gateway \ Point to Site configuration.
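
If you'd rather script the export, here's a minimal sketch, assuming the subject name used above. Export-Certificate writes a DER-encoded file, and certutil re-encodes it as the Base64 text the portal expects:

# Find the root cert created above and export its public portion.
$root = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=P2SRootCert" } | Select-Object -First 1
Export-Certificate -Cert $root -FilePath P2SRootCert.cer
# Convert DER to Base64 so the contents can be pasted into the portal.
certutil -encode P2SRootCert.cer P2SRootCertBase64.cer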

Now add the new root certificate.

When you're ready, download the VPN client. On the same screen in the Azure portal, click Download VPN client.


If needed, remember to export your certificate. Include the private key and give the exported PFX file a good strong password.
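
That export can be scripted as well. A hedged sketch, with the password obviously a placeholder:

# Export the client cert, private key included, protected by a password.
$pwd = ConvertTo-SecureString -String "{StrongPassword}" -Force -AsPlainText
Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=P2SChildCert" } | Select-Object -First 1 |
    Export-PfxCertificate -FilePath P2SChildCert.pfx -Password $pwd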