Chat GPT Saved Me Time and Effort in Building a Dynamic Country Dropdown in Blazor

Introduction:

Have you ever found yourself drowning in a sea of code, desperately trying to build a seemingly simple feature for your application? That’s exactly where I was, until I discovered a superhero in the form of Chat GPT. In this post, I’ll regale you with the thrilling tale of how Chat GPT swept in, saved the day, and helped me build a dynamic country dropdown in Blazor—all while keeping my sanity intact. So, hold onto your coding capes and get ready for a wild ride!

The Dilemma:

It all started when I needed to implement a dynamic country dropdown in my Blazor application. I went looking for a CSV file containing a comprehensive list of countries and their corresponding codes. The task seemed simple at first glance, but as I delved deeper, the enormity of the challenge became apparent. Manually processing the CSV file, creating a C# class, and integrating it with a Blazor form component seemed like a daunting and time-consuming task. Who wants to do all that messing around for hundreds of countries?

Enter Chat GPT, My AI Sidekick

That’s when Chat GPT came to my rescue like a coding wizard in a digital cape. With its natural language understanding and coding capabilities, I quickly realized that it could handle the entire process for me. All I had to do was provide the CSV file and outline the requirements, and Chat GPT would take care of the rest. It was like having a knowledgeable coding buddy by my side.

Processing the Country List:

I started by feeding the CSV file to Chat GPT. It quickly parsed the data and generated a C# class with the necessary fields for country names and codes. This automated process saved me hours of manual data entry and ensured accuracy.
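To give you a flavour, the result looked something along these lines. This is a sketch rather than Chat GPT’s exact output, and the file name and property names here are my own illustrative choices:

    using System.IO;
    using System.Linq;

    // Minimal CSV parsing, assuming a two-column file with a header row: Name,Code
    var countries = File.ReadAllLines("countries.csv")
        .Skip(1) // skip the header row
        .Select(line => line.Split(','))
        .Select(parts => new Country { Name = parts[0], Code = parts[1] })
        .ToList();

    // Sketch of the generated class - property names are assumptions.
    public class Country
    {
        public string Name { get; set; }
        public string Code { get; set; }
    }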

Crafting the Blazor Form Component:

Next up was the Blazor form component. Armed with the C# class generated by Chat GPT, I was able to seamlessly integrate it into my application. The country dropdown was dynamically populated with the countries and codes, all thanks to the magic of Chat GPT. It even provided me with sample code snippets to handle the selection and retrieval of the chosen country.
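I won’t reproduce its exact output here, but the shape of the component was roughly this. A sketch only; the Country class is the one from above, and SelectedCode is a name I’ve picked for illustration:

    @* Sketch of a dynamically populated country dropdown. *@
    <select id="countrySelect" @bind="SelectedCode">
        @foreach (var country in Countries)
        {
            <option value="@country.Code">@country.Name</option>
        }
    </select>

    @code {
        private string SelectedCode { get; set; }

        // In the real component, this list was populated from the parsed CSV data.
        private List<Country> Countries { get; set; } = new();
    }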

JavaScript Enchantment:

To ensure the accessibility of the dropdown for screen reader users, I needed a touch of JavaScript wizardry. And who would have thunk it? Chat GPT had just the spell for it. It offered me JavaScript interop methods to retrieve elements by their enchanting IDs and set focus with a wave of its metaphorical wand. With a sprinkle of JavaScript magic, I transformed the dropdown into a user-friendly and inclusive masterpiece. Well, maybe not. That’s a bit overboard, but damn, I was happy with it!
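For the curious, the interop pattern was along these lines. Another sketch: the helper name focusElement is my own choice, not anything standard:

    // In a JavaScript file loaded by the page (hypothetical file name):
    window.focusElement = (id) => {
        const element = document.getElementById(id);
        if (element) {
            element.focus();
        }
    };

    @* And in the Blazor component, move focus after the first render: *@
    @inject IJSRuntime JS

    @code {
        protected override async Task OnAfterRenderAsync(bool firstRender)
        {
            if (firstRender)
            {
                await JS.InvokeVoidAsync("focusElement", "countrySelect");
            }
        }
    }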

The Grand Finale: Empowered by AI:

What could have been an arduous task turned into a joyous coding adventure. By harnessing the power of AI, I unlocked new levels of productivity and creativity. The ability to delegate repetitive and time-consuming tasks to AI allowed me to focus on higher-level thinking and problem-solving.

Closing Thoughts: The Unseen Power of AI:

This thrilling coding adventure with Chat GPT taught me a valuable lesson: AI can be a powerful ally in our coding journey. By automating mundane tasks, it frees us to explore new possibilities, think outside the box, and create more efficiently. With the right tools and proper utilization, AI can truly empower us to achieve more, better.

Note: While Chat GPT played a significant role in streamlining my development process, it’s important to mention that human understanding, creativity, and intuition are irreplaceable in software development. AI should be seen as a supportive tool that enhances our abilities, rather than a replacement.

Changing the domain everywhere on a WordPress website. Especially useful for Divi.

Problem

You have a WordPress website but the domain eventually changes for some reason.

You change the domain name in the general settings page but, for some reason, you find that not all your images are appearing.

You look in the source and easily notice that there are a lot of specific links to your old domain name, primarily from Divi.

This isn’t all that easy to fix. For some reason, Divi installs files with the FQDN of your website.

Also, you may have a lot of posts with the old domain name.

So here’s how you can fix all that:

Solution

You will need file-system-level access and database access on the command line to achieve this in the most efficient way. If you don’t have this, perhaps ask your hosting company. I’m sure they would love the extra work…

Alright. I assume you are logged into the MySQL server using the MySQL command line. I’m also assuming you have moved into the correct database using the “use” command.

Must I say it? Back everything up.

Okay. Use these commands to replace the domain name in the options, posts and postmeta tables:

update wp_options set option_value = replace(option_value, 'oldDomain.com', 'newDomain.com');

update wp_posts set post_content = replace(post_content, 'oldDomain.com', 'newDomain.com');

update wp_postmeta set meta_value = replace(meta_value, 'oldDomain.com', 'newDomain.com');
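One caveat on the above: WordPress (and Divi) store some settings as serialized PHP arrays, and a plain string replace can corrupt those values when the old and new domain names are different lengths. If you have WP-CLI available, its search-replace command handles serialized data safely, so it’s worth considering as an alternative:

    wp search-replace 'oldDomain.com' 'newDomain.com' --all-tables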

Okay. You’re done with the database. Exit out of that back to the command line.

Now update the files in your website directory as follows. Again, you will need to update the path to your website here.

find /var/www/oldDomain.com/web \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/www.oldDomain.com/www.newDomain.com/g'
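When that finishes, a quick search will tell you whether anything still references the old domain (adjust the path as before):

    grep -r 'oldDomain.com' /var/www/oldDomain.com/web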

Escaping the cloud part 1 – Music

The cost of living is increasing. So I’m tightening my belt. Are you? Let me know what changes you are making.

One of the changes I have made lately is to remove my dependence on cloud services such as Spotify, iCloud, Google Drive etc. There’s a handy feature in my online banking interface that highlights discretionary spending habits. It pointed out about six months ago that I was spending quite a bit for the family on cloud services. So I decided right then and there to fix this madness.

So today is about music. Why am I paying Spotify every month? Why am I paying for Apple iCloud and Music when there’s such major overlap here? It was just an organic waste that I let sit for far too long. So, what do I really need from a music service?

  • Always accessible
  • Must work with native iOS apps
  • Must work with Sonos
  • Must have a great search function
  • Must let me listen to all tracks at random
  • Must have a nice UI that my wife and children can use
  • Nice to have is multi user access and listening history
  • It also needs to have the usual things: cover art, playlists, starring / ranking and all the other things you expect from a music service.

Downside: there were some artists that I listened to on Spotify and / or Apple Music and I hadn’t yet bought their albums. I’m rather ashamed of this actually. I always make a point of purchasing music from artists that I enjoy, but I had gotten out of the habit lately. So I spent about two hundred euro on music when I was making this change. I know what you’re going to say: this went completely against my aim to save money. But I promise you, dear reader, I would have purchased that music anyway. At some point.

I tried a few services. Navidrome probably came the closest to meeting all my needs, but it had one huge flaw: it didn’t support randomly shuffling all my tracks. I might be a bit strange, but I like to just sit down at my desk in the morning and let the music go wherever it wants. One moment it’s playing a slow air. Next it’s playing a mad fast tune with lots of energy. I enjoy it this way.

I settled on Airsonic Advanced. It supports Sonos with some messing about and it fits the bill in every other way. It’s not often updated, which will probably become a problem, but there are a few other projects in the works out there that will hopefully meet my requirements in a year or two.

I had a few challenges:

  • I want my music to be stored on the NAS in my home. Not on the server.
  • My music has a lot of accented characters. I had major problems with mounting the CIFS file system and having those accented characters displayed correctly. I ended up writing a little bit of PowerShell that I ran from my main workstation to just clean up the accented characters. I will paste that below for anyone who wants it.
  • When I add new music, I need to run that script again. I also need to restart the Docker container, just because of the way I configured the mount. It’s probably not the most optimal solution, but it works and it’s really fast.
  • The Sonos configuration is a bit fiddly, but it’s not impossible. Just be really careful when setting it up.

Other than that, it was all easy.
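As promised, here’s the accent cleanup script. Well, a minimal sketch of it; the original was specific to my setup, so treat the path as a placeholder and test on a copy of your library first:

    # Strip accented characters from file and folder names under a music share.
    # The path below is a placeholder - point it at your own mounted music folder.
    function Remove-Accents {
        param([string]$Text)
        $normalized = $Text.Normalize([Text.NormalizationForm]::FormD)
        ($normalized.ToCharArray() | Where-Object {
            [Globalization.CharUnicodeInfo]::GetUnicodeCategory($_) -ne
                [Globalization.UnicodeCategory]::NonSpacingMark
        }) -join ''
    }

    # Rename deepest items first so parent folder renames don't break child paths.
    Get-ChildItem -Path '\\nas\music' -Recurse |
        Sort-Object { $_.FullName.Length } -Descending |
        ForEach-Object {
            $clean = Remove-Accents $_.Name
            if ($clean -ne $_.Name) {
                Rename-Item -LiteralPath $_.FullName -NewName $clean
            }
        }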

In the office. Getting less done.

This is absolutely nuts. I’m here writing this blog post a little after 11:30am because it’s something I can do without thinking too much. I’m in the middle of two very complex bits of work that require my full focus and concentration. But I can’t do what I need to because of the noise outside my window and inside the office. Currently, there are two people standing having a chat outside and there’s a very noisy saw that has been running for the past ten minutes. Just before the saw, there was someone talking very loudly in the office and there were two other colleagues trying to investigate a 2FA issue. These are all tasks that these people need to carry out and I absolutely don’t have any problem with what they are doing. It’s just that I simply can’t focus properly on my own tasks. Just when I think the noise is going to stop, it comes back again. Like a knife through my ears.

I read a computer interface using a screen reader that provides feedback using synthesized speech through headphones that I wear constantly. So imagine this from my perspective. It would be like if you were trying to read complicated text off a page but I kept moving the page constantly and continually strobed the lights in the room. It would be very distracting and stressful. Now imagine I stop for less than a minute and then, without warning, begin doing the same thing again. You would be on edge just waiting for the next distraction.

But this is only one part of the reason that this return to the office has me driven round the bend. This morning I left the house just before 7:50am. The bus didn’t actually arrive until 8:20am and then we got stuck on the M1, the main road between Drogheda and Dublin, for ages as a result of a crash. So I didn’t get to the office until 9:40am. If I was working from home, I would start at around 9am, but I would have had a good long walk with the dog, so my head would be in the right space to have a productive day. I get straight in, and with my custom-built office environment at home, I can be assured of absolute solitude and quiet.

It’s not all about work either. At about 3:30pm, I’ll go into the house and, for about 10 minutes, I’ll catch up with my two children to ask them how school was. But then I go back to my office and I’ll easily work until 6pm without any problems. Because I know that when I finish, I have a one-minute walk back into the house. Whereas in Dublin, I’ll finish at 5pm on the dot, because I know I have just over an hour before I get home. So there’s no incentive for me to continue working to get the job done.

It just seems senseless to me. My job is not about constantly engaging with people. I value my colleagues and I enjoy collaborating with them. But I find that I can do this equally well remotely. I can work more effectively when I’m not in the office.

Rant over.

That damn saw is still running. I don’t know how I’m going to get anything done today.

Updating certificates on RDS (Remote Desktop Services)

Every year now, I need to update the certificates on my Microsoft Remote Desktop Services servers.

This involves:

  • IIS front end
  • RDWeb Web client
  • Components of RDS through Server Manager: connection broker, gateway and web.
  • RDS gateway.

Rough instructions:

Install the certificate

  1. Open the MMC
  2. Click Add/Remove Snap-in
  3. Choose Certificates
  4. Choose “Local computer”
  5. Choose this computer
  6. Expand Personal\Certificates
  7. Right-click Certificates and, under All Tasks, choose Import Certificate.
  8. Now import your new PFX file
  9. I recommend giving it a friendly name.
  10. Now right-click this certificate and choose All Tasks.
  11. Click Export
  12. Follow the wizard. Don’t export the private key.
  13. Save it to somewhere that will be easy to find shortly.

Update IIS

  1. Open Internet Information Services (IIS) Manager
  2. Expand your server then expand sites
  3. Right-click the default website.
  4. Click Edit Bindings.
  5. Click the HTTPS port 443 binding.
  6. Click Edit.
  7. Choose your certificate using the friendly name that you configured earlier.
  8. Click OK, then Close.
  9. You can now close the IIS administration interface.

Update the RDWeb web client

You do this by importing the new certificate and then re-publishing the client.

  1. Open PowerShell as administrator.
  2. Import the certificate using the following command. Replace everything between the <> with the path to the .cer file you exported earlier.
    Import-RDWebClientBrokerCert <path to cer file>
  3. Now re-publish the web client with the new certificate.
    Publish-RDWebClientPackage -Type Production -Latest

Update the RDS service using Server Manager

  1. Open Server Manager
  2. On the right, open Remote Desktop Services
  3. In your main deployment window, click the Tasks button.
  4. Click Edit Deployment Properties
  5. Highlight the certificates option on the left
  6. For each certificate, do the following:
  7. Click the certificate
  8. Click change
  9. Choose the second option, Add from file
  10. Browse to the PFX file
  11. Type the password
  12. Click OK
  13. Click Apply
  14. You will need to do this at least four times, once for each certificate.
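If you’d rather script that last bit, the RemoteDesktop PowerShell module can set all four roles in a loop. A minimal sketch, with the broker name and PFX path as placeholders for your own values:

    # Placeholders: swap in your own connection broker FQDN and PFX path.
    $password = Read-Host -Prompt 'PFX password' -AsSecureString
    foreach ($role in 'RDGateway', 'RDWebAccess', 'RDRedirector', 'RDPublishing') {
        Set-RDCertificate -Role $role -ImportPath 'C:\certs\rds.pfx' `
            -Password $password -ConnectionBroker 'broker.example.com' -Force
    }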

Update the certificate on the gateway

  1. Open the gateway manager
  2. Right-click the gateway on the left
  3. Click Properties
  4. Move to the Certificates tab
  5. Choose the third button down to import a new certificate.
  6. Browse to the PFX
  7. Type the password when prompted

At this point, you will probably need to reboot the connection broker and front end servers.

Create a fresh shuffled playlist on your NAS every day.

Here’s what I needed:

I have a huge amount of music and, for some reason, I just like it to play shuffled. When I hit play on the Sonos, I just want it to randomly go through and play the music on the NAS. This actually works beautifully on the Sonos; it doesn’t really need this. But I’m working on something interesting using Home Assistant at the moment and it doesn’t support the simple requirement of playing all music in a folder. It requires a playlist.

I already have a Linux server so each day at 3am, when everyone is in bed, a script will run to recreate and re-shuffle the playlist. So when new music is added, it will get added to the playlist and the order is always different.

First of all, put this in /etc/cron.daily but, obviously, replace the parts in [] brackets to match your own values.
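The script itself is short. Here’s a minimal sketch of it; the bracketed values are the parts to replace:

    #!/bin/sh
    # Rebuild and shuffle the playlist from everything in the music folder.
    # Replace the bracketed values with your own paths.
    MUSIC_DIR="[path to your music folder]"
    PLAYLIST="[path to your playlist file].m3u"

    find "$MUSIC_DIR" -type f \( -iname '*.mp3' -o -iname '*.flac' -o -iname '*.m4a' \) | shuf > "$PLAYLIST"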

Okay, you have that file created?

Now set permissions on it using chmod 755.

Finally, add an entry to your crontab to run it each day at 3am. First, open your crontab by typing crontab -e, then press Enter.
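Then add a line like this, again replacing the bracketed part with the path to your script:

    0 3 * * * [path to your script]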

That’s all there is to it.

Deploy Azure Log Analytics extension through Azure Arc using policy – Deploy effect missing?

This might interest some of you. I found an inconsistency between the policies available when browsing from within the Azure portal and the policies available from the Microsoft KB.

I am going round in circles with Microsoft documentation at the moment, so I thought I was going to need Microsoft support soon.

All I am trying to do is configure Windows updates on servers.

However, Windows updates require that an Azure Automation account is created. This is done.

Then the update conformance reports require that an Azure Log Analytics workspace is created. This is done.

However, the Log Analytics workspace requires that I deploy a Log Analytics agent to all Azure Arc managed servers. This is where I am encountering trouble.

I have about four options but two of them are preferable:

  • DSC
  • Policy

Using policy, I can validate that the agent is not installed. However, based on the documentation, the “effect” of the Configure Log Analytics extension on Azure Arc enabled Windows servers policy should be “Install if not exists”. However, it only supports two “effects”: “Disabled” and “Audit if not exists”. So there doesn’t seem to be a remediate function within this policy, contrary to the documentation.

The other way of doing this would be to install the DSC agent onto all Azure Arc servers. This seems like overkill and I would prefer not to go down this rabbit hole if possible. However, I have explored this at length. I have written a script, compiled it and it’s ready to go. But, again, I would need to deploy the DSC agent. So I’m back to step one again: deploying MSIs through Azure Arc using policy.

I finally found that when I looked for the “Configure Log Analytics extension on Azure Arc enabled Windows servers” policy from within the Azure portal, it had the “Deploy if missing” effect, not just the “Audit if missing” effect. This cost me four hours of messing around. I wish I had just poked around and not bothered with the bad documentation.

Traveling for Tunes 2022 podcast

It’s that wonderful time of year again and, with that minor annoyance of an international pandemic finally out of the spotlight, it’s again time for my session sprint. This year, it’s very different. I really enjoyed including the family. But there were still the usual trips to some of the normal towns as well.
I love getting your feedback. Drop me a note and let me know what you think.

PowerShell script to alert when an object is added or removed from an important group.

It’s Thursday at 12:27pm. You are sitting in a meeting. Meanwhile, someone has just gained access to your domain admin group and now has the keys to your company’s entire compute platform. But all is not lost, if you have the right alerting in place. Less than five minutes later, you receive an alert to say that the group membership has changed and you can take immediate action. Saving the world… and your company… from absolute disaster! That’s what this PowerShell script does, along with lots of checks. For example, let’s say someone gets in through a back door and starts messing around with systems. Well, if the script doesn’t see the right number of logs in the folder, it starts to get a bit jumpy. That results in an alert as well. Or if the previous logs can’t be read, the same thing happens. The world gets an email.
So here you go. Have fun with this. I encourage you to add more checks. It has been expanded since I published it to add even more validation to make sure that the script and the infrastructure don’t change. But this will get you well on the right road.
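The full script, with all its checks, is longer than what follows; this is a minimal sketch of the core idea to get you started. The group name, log path and mail settings are all placeholders:

    # Alert when the membership of an important AD group changes.
    # Placeholders throughout - adjust the group, paths and mail settings.
    Import-Module ActiveDirectory

    $group   = 'Domain Admins'
    $logFile = 'C:\GroupAudit\members.txt'

    $current = Get-ADGroupMember -Identity $group |
        Select-Object -ExpandProperty SamAccountName | Sort-Object

    if (Test-Path $logFile) {
        $previous = Get-Content $logFile
        $diff = Compare-Object -ReferenceObject $previous -DifferenceObject $current
        if ($diff) {
            # Membership changed - send the alert.
            Send-MailMessage -From 'alerts@example.com' -To 'admins@example.com' `
                -Subject "Membership of $group has changed" `
                -Body ($diff | Out-String) -SmtpServer 'smtp.example.com'
        }
    }

    # Record the current membership for the next run.
    $current | Set-Content $logFile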

Audit all Windows firewalls on your domain and display the results in a UI using PowerShell Universal

Do you want the code for this? No problem. Just skip down to the heading that says “The Code!”.
Yeah yeah yeah. I know I have given out plenty about Ironman Software and their PowerShell Universal product on a few different sites. But unfortunately for me, there’s just nothing else on the market that can wrap a nice, easy(ish) UI around PowerShell scripts. So stick with me while I explain what I’m doing here.
My need:
Hey, first, look up something called the STAR principle. It’s an Amazon interview technique and I’m going to use it here to explain the last few days quickly and easily.
STAR stands for:

  • Situation
  • Target
  • Action
  • Result

So the Situation is:

I need to provide a comprehensive, up-to-date, reproducible and accurate report of the status of Windows firewalls on servers.

The Target is:

Re-use a script that I wrote two years ago, wrap it in a UI and give that to the director so he can run this report, or ask someone else to run it, without coming to me more than once.

Action:

Ah, here’s where it gets fun.
Firstly, here is how it hangs together:

  1. I have all the processing in a PowerShell module. I’m comfortable working in the command line, so having it in a module full of functions that I have written to get me through the day by removing repetitive tasks suits me well. But it doesn’t suit anyone else. Having PowerShell vomit out text to the director wouldn’t put me on his Christmas list. In fact, I’m already not on his Christmas list. Maybe I should go back to plain text? Pondering for a different day. Sorry, I went off on one there. Anyway, what I’m saying is I want to wrap that in a UI, but I don’t want to rewrite code. Reuse and recycle.
  2. I went in to look around PowerShell Universal for the first time in ages. I was getting weird errors when using PowerShell 5 where it wasn’t recognising stored secrets. It turns out that the maximum time you can store a secret for is one year. So I suppose that’s just something I missed in some bit of documentation somewhere.
  3. Then, sometime over the past year, I tightened security on all of the service accounts, so bye-bye to storing Kerberos tickets in an active user session. This made me rethink how I was handling permissions for this script.
  4. Sometime in the past two years since I wrote this really great function, I got too clever for my own good. In other words, I over-complicated it. Initially, I was just passing in a string as a parameter, but at some point I must have decided that I wanted to throw custom objects with servers in them at it, and I also started using the pipeline. What am I talking about? Okay, I’ll explain briefly (there’s also a short sketch after this list).
    This is how you would pass something to a function using parameters:
    First, let’s say we have an array called $MyWonderfulArray with several fields in it: ServerName and TrafficDirection. If the function doesn’t support taking the fields from the pipeline, we need to explicitly loop through every item in this array and pass it the values for ServerName and TrafficDirection. That sounds kind of slow, doesn’t it? Yeah. It is! Here’s an example:
    $ServerVariable = $MyWonderfulArray[0].ServerName
    $InboundOrOutbound = $MyWonderfulArray[0].TrafficDirection
    MyCoolFunction -ServerName $ServerVariable -TrafficDirection $InboundOrOutbound
    Now, firstly, you might ask what the idea of the [0] is. That’s just getting the first item in that array. I could loop over the array, but this wasn’t meant to be a PowerShell tutorial.
    But now let’s take a quick look at using the pipeline. Let’s say your function expects two parameters: ServerName and TrafficDirection. Well, because these are already specified as fields in my array, I don’t need to explicitly pass them as parameters to the function, assuming of course that I have configured the parameter section at the top of the function to support grabbing these fields from the pipeline. So now, without needing to loop or even explicitly pass over the fields, I do this:
    $MyWonderfulArray | MyCoolFunction
    See? The pipeline is cool.
    But because I had changed the function, I was encountering infinite loops and some occasional errors. That wasn’t too difficult to fix. I got it sorted within a few minutes.
  5. I found that tens of thousands of lines were added for some particular servers. It turns out that whenever a user logs into an RDS session host server running Windows Server 2019, it creates a whole lot of firewall rules for that session. Okay. Anyway, I fixed that. It required painfully removing tens of thousands of rules, then applying a registry fix to each session host server so that the problem doesn’t repeat in the future. Still, this took a good three hours tonight, because I was deleting so many rules each time that the MMC snap-in kept freezing. Why didn’t I use PowerShell? Well, because there are about 40 other rules in there specific to the applications running on those session host servers, and the last thing I want is someone from that faculty calling me on Monday morning, with a room full of students anxiously waiting to start their labs, while I try to figure out which rule of the tens of thousands I removed caused this particularly horrible delay to their teaching and learning. So that really wasn’t fun.
  6. Next, I ran the script again but found that, for some reason, one of the filters for traffic direction wasn’t working. I’m running this code remotely using Invoke-Command and it’s wrapping a non-native PowerShell command, so sometimes things behave in unexpected ways. Again, that wasn’t really difficult to sort. A Where-Object to return only the output that I wanted got around the problem. But you must understand, oh most patient reader, that each time I ran this script, it could take up to an hour or even two. It goes across quite a lot of servers and really dives deep into the firewall rules, what they allow and what they reject. So each thing I changed, even if it was minor, took a long time to process.
  7. I had messed around with creating a UI for this a few years ago, but I tidied it up tonight. I had a stupid bug in it: it was using the entire count of servers when reporting on the number of bad / dangerous rules. Now I have a separate variable with the count. Why I didn’t just do that a few years ago, I don’t know.
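Here’s that sketch I mentioned: roughly what the parameter block of a pipeline-aware function looks like. The function name and body are illustrative, not my real auditing code:

    # Skeleton only - the real function does the firewall auditing.
    function MyCoolFunction {
        [CmdletBinding()]
        param(
            [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
            [string]$ServerName,

            [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
            [string]$TrafficDirection
        )
        process {
            # Runs once per object that comes down the pipeline.
            Write-Output "Checking $TrafficDirection rules on $ServerName"
        }
    }

    # Each element of the array binds its ServerName and TrafficDirection fields.
    $MyWonderfulArray | MyCoolFunction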

Result:

It all works. It took a lot longer than I would have liked but I’m really happy with the result. Something that anyone with the right level of permissions can independently use without my input.

Absolutely nothing in my life has gone to plan this week. Well, all I have had time for is technology problems, so I suppose my life has just been technology. Still, though, I need to get to another job tomorrow where I installed CUDA but the GPU isn’t found after a reboot. I spent three hours on that on Wednesday evening, but now the person just wants me to install Docker and use CUDA and Kaldi through containers instead. That’s going to be another truckload of fun, but it’s going to have to wait until tomorrow because I’m tired.
Hey, for the record, I’m not really a fan of Nvidia at the moment either. Their documentation is out of date, their drivers are out of date and they mix and match terms. For example, at the top of the driver support page, they talk about the Tesla T4, but then down the page they say the driver only supports series 9 and above. How the hell am I meant to know what series the Tesla T4 is? Anyway, sorry. I’m rambling again.
Because I’m feeling very generous, here’s some code that will just change your life if you are administering a lot of Windows servers and you need to audit all the firewall configs.

The Code!