• Category Archives Technology
  • A bad hair day

    I arrived in just before 9AM this morning. It had been a good morning up to then, servers hadn’t done anything weird during the night, the emailed alerts didn’t show anything out of the ordinary and there wasn’t all that much going on so I could get things done for a change without fighting fires.
    While working on a few things I logged onto a server using remote desktop and as I always do, I started Jaws.
    Two seconds later alarms started going off all over the place!
    The server threw a blue screen of death and wouldn’t respond to anything, even when accessed using the network KVM.
    Of course, the server I took down had to be in the building furthest away. I had to go over and reboot it and make sure everything worked afterwards.

    This is where things got even worse!

    The tip of my cane isn’t moving. It’s jammed! So, it’s still usable, but it’s not very easy.

    I made it over eventually.

    I got into the server room which by the way is in a lower basement.

    I unlocked the iPhone and looked for the SayText app so I could read the labels on the front of the servers to find the one that I needed to reboot. To my horror, I hadn’t installed it after I lost everything on the phone a few days ago. I tried reinstalling it, but because I was in such a bad location for reception I couldn’t! I had to close everything and go back out of the basement to download it.

    This sounds easy but there are fire suppression systems to turn off, a KVM to fold up, racks to close and security systems to get in and out of.

    This server room has three air conditioning units and 16 racks, so it’s very noisy. It’s probably the place I hate going to most.

    All because Jaws made a server crash.


  • Apple TV accessibility.

    I strongly encourage everyone to get the word out.
    Let me say first of all that I am not a so-called “Apple Fanboy”.
    I do however believe that Apple have done more for accessibility in the past three years than any other company, as in a short time they have changed the landscape of mobile, desktop and now TV accessibility for the better.

    Recently we have found that devices such as the iPod nano, iPod touch and iPhone, and the Mac OS X operating system, are as accessible as any others out there, if not more so in some situations. I’m now delighted to say that the Apple TV joins the lineup of accessible products made by Apple.

    Until today, a visually impaired person had to use Apple’s Remote app on an iPhone or iPod touch to access and play content independently on the Apple TV. With the release of iOS 4.1 for the device, that is no longer the case. The Remote app was efficient and perfectly adequate, but VoiceOver support makes this the only really compelling solution for someone looking for this level of multimedia integration.

    I have recorded a review of the Apple TV with access through Apple’s Remote app to demonstrate the flexibility of that option. With the release of VoiceOver support, it was great to release a second guide. This walks users through updating the Apple TV, enabling VoiceOver and navigating around music, videos, podcasts and playlists.

    All of the recordings relating to accessible media players are available on Listen and Learn Recordings.


  • Remote control module in SCCM

    I knew the remote control module in SCCM was very handy, but I didn’t really bother with it until this morning, mainly because I do very little with desktops now as I’m focused on the server and service side of things.

    I decided to have a quick look this morning though and my findings are a little frustrating.

    Firstly, there’s a full screen option in the View menu that doesn’t really do anything. I think it’s simply there in case you want to use Remote Desktop instead, as the window is likely the same. The upshot is that the native SCCM remote control module doesn’t pass the Windows key through to the remote system. This is a pet peeve of mine. I hate it when I’m connected to a remote system and keys are passed through to my local machine instead. Using a full screen view usually gets around this, as options can be set specifying that all keys should be passed through within that view.
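
    For comparison, with a standard Remote Desktop connection this behaviour can be controlled in a saved .rdp file. A minimal sketch of the relevant setting, as I understand the values (0 applies Windows key combinations locally, 1 on the remote machine, 2 only while in full screen):

      keyboardhook:i:1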

    The second thing is that I don’t know why it’s not a standard RDP connection. This really gets on my nerves, as if it were RDP then Jaws, or most other screen readers that support remote desktop, would be able to read what was on the remote box. It would also give more configurable options, such as sending remote audio to the local machine.

    It’s probably about as useful as VNC. This will be great for most people but for me it’s very very frustrating.


  • Logwatch in Debian Linux.

    Installing Logwatch is very straightforward and it’s definitely worth taking a few minutes to do it. The format it can send your system logs to you in is so nice and easy to read that you’ll wonder how you ever kept track of your server without it.

    I like logs to be mailed to me every morning. These are the steps you need to take to get a similar report:

    1. Firstly run the following command to install Logwatch. I’m assuming you already have postfix and sendmail installed.

      apt-get install logwatch

    2. The config file you need to edit is located at:

      /usr/share/logwatch/default.conf/logwatch.conf

    3. I’d suggest replacing the following entries as follows:

      Line 35
      Output = mail
      Line 37
      Format = html
      Line 44
      MailTo = name@mydomain.com
      Line 45
      MailFrom = logwatch@mydomain.com
      Line 67
      Archives = No
      Line 70
      Range = yesterday
      Line 77
      Detail = Med

    4. Test your Logwatch configuration by running logwatch on the command line (see the example after this list).
    5. Create a new cron job to run this at 5:45AM every day. This is the time I generally get reports sent out. Backup jobs, Windows and Linux security reports and Logwatch reports are sent out between 5:30AM and 6AM so that things are spaced out.

      crontab -e
      45 5 * * * /usr/sbin/logwatch
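
    As a rough example for step 4, running Logwatch manually like this should print a report to the console rather than mailing it, assuming your version of Logwatch supports these switches:

      logwatch --output stdout --range yesterday --detail Med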

    That’s all there is to it.

    Update on 27th January 2012

    Logwatch in some versions of Debian is slightly broken if you choose to format messages using HTML. To get around this you will need to download the package from source and install it. The instructions to do this are outlined below.

    1. Create a temporary directory to save the files to:

      mkdir /tmp/logwatch
      cd /tmp/logwatch

    2. Download the package from sourceforge by using the following command.

      wget http://ignum.dl.sourceforge.net/project/logwatch/logwatch-7.4.0/logwatch-7.4.0.tar.gz

    3. Unpack the archive that you downloaded in step 2.

      tar xzvf logwatch*

    4. cd to this directory.

      cd logwatch[tab]

      [tab] means that if you press the tab key on your keyboard the name of the directory / file will be automatically completed for you. When using the console this saves a lot of time.

    5. Make the install file executable.

      chmod 777 install[tab]

    6. Run the install script.

      ./install[tab]

    7. Answer all questions with the defaults by pressing the enter key.
    8. The config file will now be created in /etc/logwatch/logwatch.conf
    9. Use the lines above to specify what you want to configure; see the sketch after this list.
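
    As a rough sketch, the relevant lines in /etc/logwatch/logwatch.conf would end up looking something like this, using the same values as step 3 earlier (substitute your own addresses):

      Output = mail
      Format = html
      MailTo = name@mydomain.com
      MailFrom = logwatch@mydomain.com
      Archives = No
      Range = yesterday
      Detail = Med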

    Alternatively, run the following command, replacing the address with your own email address of course. This runs Logwatch directly and does not read from a configuration file.

    logwatch --output mail --format html --mailto joe.bloggs@MadeUpCompany.com --archives no --range Yesterday --debug Med


  • VOIP over any network over a VPN.

    I just wanted to summarise what I’ve been doing over the past few days.

    Firstly: A quick overview of the systems I have running here:
    Almost everything is virtualized and runs under ESX4.
    I have a few Windows 2008 boxes. One of these acts as a domain controller, another as an Exchange 2007 server, another is for Forefront, another is a SQL server, another is for backups and then another is for administration and the VPN. I have a number of Linux servers as well. One is for web hosting, another is for the firewall and routing, and the last one is for Linux-hosted email and mailing lists.

    Over the past while I’ve upgraded the antivirus for Exchange and client desktops to Forefront 2010.

    I also installed Microsoft System Center Data Protection Manager 2010. That’s working particularly well. I’m delighted with it. It’s backing up Exchange and a few major file shares as well.

    Most recently I’ve been tightening up on security. I’ve added a VPN and blah blah blah. That’s not very interesting.

    What is kind of cool however is that with the VPN, I am able to connect to my Trixbox server. Oh, I forgot to mention that one at the start of this post. It provides all my home phone connectivity. So, no matter where I am in the world, I can still take calls made to my home phone number.

    That’s kind of cool isn’t it?


  • Configuring the firewall on the desktop and server side to allow DPM 2010 to push out the client.

    Very simply, there are pages out there that will tell you what ports are required for distributing the Data Protection Manager agent, but the problem is they don’t tell you what is required on the client side.

    This command will open the firewall to allow the agent push. This is messy and not really what you would ordinarily like to do, as it doesn’t really allow for alterations to most hardware firewalls, but for a normal Active Directory network setup it will work fine.

    netsh advfirewall firewall add rule name="Allow DPM Remote Agent Push" dir=in action=allow service=any enable=yes profile=any remoteip=123.123.123.123

    Just replace the IP address with the one you have assigned to your DPM 2010 server.

    You should probably remove this rule from your firewall after this is done.
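
    As a rough example, the rule can be removed afterwards with the matching delete command, assuming you kept the same rule name:

    netsh advfirewall firewall delete rule name="Allow DPM Remote Agent Push"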


  • Backing up to a remote server using scp and checking your results.

    As promised, here is the next part of my series on backing up a remote Linux server.

    This script is still quite straight forward but on the up side, the more straight forward it is, the easier it is to troubleshoot if something goes wrong down the line.

    It does a few things. It downloads all the archives in the backup directory, checks that they’re downloaded and, if that check is successful, runs a further check to make sure there are no problems with the archives themselves. If something has gone wrong, it is logged to a file matching that date with an extension of .err.

    #!/bin/sh
    # Date stamp used for the download directory and the log file names.
    thisdate=$(date +%Y%m%d)
    backupstatus=failed
    logdir=/home/YourUserName/backups/logs
    backupdir=/home/YourUserName/backups
    mkdir $backupdir/$thisdate
    # Download the archives; the log entry and status change only happen if scp succeeds.
    scp YourRemoteUserName@IPAddressOfServer:backups/*.gz /home/YourUserName/backups/$thisdate/ && echo $thisdate files downloaded from server into $backupdir >> $logdir/$thisdate.log && backupstatus=success
    if [ "$backupstatus" = "success" ]; then
    ls $backupdir/$thisdate/ && echo $thisdate files are in $backupdir/$thisdate >> $logdir/$thisdate.log
    tar ztvf $backupdir/$thisdate/*.gz && echo $thisdate archives checked and decompressed correctly. >> $logdir/$thisdate.log && backupstatus=success
    # If the downloaded archives are not actually there, flag the run as failed.
    ls $backupdir/$thisdate/*.gz || backupstatus=failed1
    if [ "$backupstatus" = "failed1" ]; then
    echo $thisdate The files did not download >> $logdir/$thisdate.err
    else
    # Run the archive test again, sending any errors to the error log.
    tar ztvf $backupdir/$thisdate/*.gz 2> $logdir/$thisdate.err
    fi
    fi
    thisdate=
    backupstatus=
    logdir=
    backupdir=

    As always, I like to clean up my variables. The declarations of these are at the top and the bottom of the script.

    In the middle is where the interesting stuff is.

    As in the last script, the command after the && will only run after the first command completes successfully. Therefore, it’s a great way of easily checking for the right exit status.
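
    As a quick illustration of that pattern, the second command here only runs if the first one succeeds (using a made-up directory and log file):

    mkdir /tmp/demo && echo demo directory created >> /tmp/demo.log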

    So, when I run ls on the directory that should hold that night’s backups, I’m validating the check done above that the download was indeed successful.

    The next check is much more important. It makes sure that the downloaded archives are readable. Notice the t switch in the tar command: “tar -ztvf”. Again, if this check is not successful, the log won’t be updated.

    Of course, if things fail, I want to know why! So, that’s where the next if block comes in. Instead of just writing success or fail status messages to the logs, it puts something meaningful into the error log. By redirecting the errors from the tar command, we’ll see what has happened: was the file missing, or was the archive corrupt?

    Of course, there’s one drawback to this. What happens if not all the archives are generated on the server side? Well, that’s where the logs on the server come into play. It would be nice to have them all together in one place, but that’s an easy enough job using a few other commands.

    In the next part of this, I will look at backing up individual MySQL databases.


  • Using RSA or DSA for authentication to a Linux server via SSH or SCP.

    Following on from my post yesterday about backups, I thought I’d give a further explanation as to how to copy down the archives that I created in the script.

    For this, I’m using SCP. However, if using SCP, you ordinarily need to log on.

    If you’re prompted for a username and password every time your script runs an scp command, it’s kind of pointless having cron run the script at all.

    So, to get around the requirement to log in, while at the same time keeping the setup secure, we use an RSA or DSA key.

    For the rest of this post, I’m going to call the machines backup and server. The backup is the machine I am copying the backup files to.

    On the backup machine, type the following commands to generate the files and copy the public file across to the server. I suggest you use a very restricted account on the backup and server for this.

    ssh-keygen -t rsa
    Hit enter for the first question to agree to save the key to /home/YourUserName/.ssh/id_rsa.
    Hit enter without typing anything for the second and third questions as we don’t want a password for this particular key. Note, this is usually not recommended but it should be ok for this type of situation.
    It will tell you that a public and private key have been created and it will give you the fingerprint of the newly created key as well.

    Next, you will want to copy the public key across to your server. Note, the server is the machine that hosts your backup scripts.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/

    If this is the first time you’ve used a public key then use the following command as it will make things easier for you.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/authorized_keys
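
    If your distribution includes it, the ssh-copy-id helper does much the same job and appends the key rather than overwriting anything, so it’s a safe alternative to the command above:
    ssh-copy-id YourUserName@ServerName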

    If however you have used other keys, do the following:
    ssh YourUserName@ServerAddress

    Type your username and password to log in.

    Now, type the following to append the contents of id_rsa.pub to the authorized_keys file.
    cat .ssh/id_rsa.pub >> .ssh/authorized_keys

    Now, leave the ssh session by typing exit.

    From the backup machine, you can now log in via ssh without providing a password.

    Note!!!

    You might want to secure your public key. If it goes missing, this could go very, very badly for you, as this key does not require a password.

    Log into the server by typing:
    ssh YourUserName@ServerAddress

    Now, change the permissions of the file so that this restricted user account is the only one with read and write access to the authorized_keys file.
    chmod 600 .ssh/authorized_keys

    Now, get out of the ssh session by typing exit.

    The next step will be running scp to download your backups and verify that they’re readable. If they’re not, we’ll want to log the failure.


  • Backing up a Drupal site.

    I host a number of Drupal sites, as well as WordPress and custom-made ones.

    When you host a site, one of the first questions you’re asked is: do you have the ability to back up and restore my site if something breaks?

    For obvious reasons, that’s an important question. But it’s a balancing act. It’s important to make sure you back up regularly, but you don’t want to overdo it and use up all your bandwidth copying said backups off the server.

    So, for backups you need to separate them into four parts.

    • Nightly Full server backups.
      If the server goes down, I want to be able to bring it back within 5 minutes.
    • Monthly Full site backups.
      These will be compressed archives that contain everything from the site including content and databases.
    • Weekly differential site backups
      These are stored on a server that mirrors the configuration of the primary. It is used for testing new server configs before they go live on the production server.
    • Daily site backups
      This is a backup of important site files that can become damaged as a result of errors during an upgrade or configuration change. This does not contain a database backup but is very useful for very quick restores.

    With that in mind, I have created the final part of this puzzle. The daily backup script below archives the important directories in a Drupal installation so they’re ready to be copied by the remote server. I have these scripts saved to a location in the home folder of a very restricted account that is used simply for this task. A symbolic link in /etc/cron.daily points back to each of these scripts.
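
    As a rough sketch, that link can be created like this, assuming the script is saved as drupal-backup in that account’s home folder (on Debian, run-parts normally skips files in /etc/cron.daily whose names contain a dot, so don’t give the link a .sh extension):

    ln -s /home/RestrictedAccount/scripts/drupal-backup /etc/cron.daily/drupal-backup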

    #!/bin/bash
    # Date stamp for the log entries.
    thisdate=$(date +%Y%m%d)
    backupstatus=false
    # Archive the important Drupal files and folders; the status only changes to true if tar succeeds.
    tar -zcvf /home/UserName/backups/UserName.tar.gz /home/UserName/public_html/sites/all /home/UserName/public_html/sites/default/settings.php /home/UserName/public_html/sites/default/files/playlists /home/UserName/public_html/sites/default/files/js /home/UserName/public_html/sites/default/files/css /home/UserName/public_html/cron.php /home/UserName/public_html/includes /home/UserName/public_html/index.php /home/UserName/public_html/install.php /home/UserName/public_html/misc /home/UserName/public_html/modules /home/UserName/public_html/profiles /home/UserName/public_html/scripts /home/UserName/public_html/themes /home/UserName/public_html/update.php /home/UserName/public_html/xmlrpc.php && backupstatus=true
    if [ "$backupstatus" = false ]; then
    echo Error $thisdate Backup failed. >> /home/UserName/backups/UserName.log
    else
    echo $thisdate Backup completed without errors. >> /home/UserName/backups/UserName.log
    fi
    backupstatus=
    thisdate=
    # Give the restricted copy account ownership of the archive so it can read it.
    chown RestrictedAccount /home/UserName/backups/UserName.tar.gz

    So, what am I doing there?

    • First, I declare a variable to hold the date.
    • Second, I declare a variable that holds the value false. If the archive command doesn’t work, this will never be set to true.
    • Next, I archive very specific folders. Notice I’m not archiving /home/UserName/public_html/sites/default/files, because that contains audio, pictures and videos and I really don’t want or need to include them in every day’s backup file; it would be far too large.
    • Notice that there’s a change to the backupstatus variable at the end of the archive command. Because it comes after an &&, it will not be run unless the archive command is successful.
    • Next, I use an if statement. If the backup status is false, I write an error line to the log file. Notice that I put Error at the start of the line. This just makes things a bit easier, because I can scan the start of each line in the log for one that doesn’t start with a date.
    • Of course, if the variable comes back true, then the log file is updated to reflect that the archival job was successful.
    • Finally, I do some cleanup. I set both variables to blank values and make sure that the user who has only very few access privileges can get the file.
    • I don’t doubt that there may be a better way of doing that, but this way works very well.

      On the other machine, a cron job is set to run very early in the morning to copy down these archives. With every archive it copies, it logs it on the remote server. That way, if what I call the copy job fails, I can see it and take any required action.

      I may be doing too many backups at the moment. With any process like this, it will take some analysis over a few weeks to determine if I can reduce the frequency of backups depending on the number of updates made to each site. Because I don’t host a huge amount, I can even tailor the backup schedule per site so that sites that are updated frequently are backed up more often.
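
      As a rough sketch, the copy job on the backup machine is just another crontab entry pointing at the download script from the earlier post; the path and the 4:30AM time here are only examples:

      crontab -e
      30 4 * * * /home/YourUserName/scripts/copy-backups.sh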