• Category Archives: Server administration
  • Remote control module in SCCM

    I knew the remote control module in SCCM was very handy but I didn’t really bother with it until this morning. Mainly because I do very little with desktops now as I’m focused on the server and service side of things.

    I decided to have a quick look this morning though and my findings are a little frustrating.

    Firstly, there's a full-screen option in the view menu that doesn't really do anything. I think it's simply there in case you want to use remote desktop, as the window is likely the same. This means that when using the native SCCM remote control module, the Windows key isn't passed through to the remote system. This is always a pet peeve of mine. I hate it when I'm connected to a remote system and keys are passed through to my local machine. Using a full-screen view usually gets around this, as options can be set specifying that all keys should be passed through within that view.

    The second thing is that I don't know why it's not a standard RDP connection. This really gets on my nerves, because if it were RDP, JAWS or most other screen readers that support remote desktop would be able to read what was on the remote box. It would also give more configurable options, such as sending remote audio to the local machine.

    It’s probably about as useful as VNC. This will be great for most people but for me it’s very very frustrating.


  • Logwatch in Debian Linux.

    Installing Logwatch is very straightforward, and it's definitely worth taking a few minutes to do it. The format it can send your system logs to you in is so nice and easy to read that you'll wonder how you ever kept track of your server without it.

    I like logs to be mailed to me every morning. These are the steps you need to take to get a similar report:

    1. Firstly, run the following command to install Logwatch. I'm assuming you already have Postfix (or another MTA providing the sendmail command) installed.

      apt-get install logwatch

    2. The config file you need to edit is located at:

      /usr/share/logwatch/default.conf/logwatch.conf

    3. I’d suggest replacing the following entries as follows:

      Line 35
      Output = mail
      Line 37
      Format = html
      Line 44
      MailTo = name@mydomain.com
      Line 45
      MailFrom = logwatch@mydomain.com
      Line 67
      Archives = No
      Line 70
      Range = yesterday
      Line 77
      Detail = Med

    4. Test your logwatch configuration by running logwatch on the command line.
    5. Create a new cron job to run this at 5:45AM every day. This is the time I generally get reports sent out; backup jobs, Windows and Linux security reports, and Logwatch reports all go out between 5:30AM and 6AM so that things are spaced out.

      crontab -e
      45 5 * * * /usr/sbin/logwatch

    That's all there is to it.
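
    As a quick sanity check before relying on the mailed report, the same settings can also be passed as command-line flags with the output sent to the terminal instead of mail; a sketch using the flag equivalents of the config entries above:

      logwatch --output stdout --format text --range yesterday --detail Med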

    Update on 27th January 2012

    Logwatch in some versions of Debian is slightly broken if you choose to format messages using HTML. To get around this, you will need to download the source package and install it. The instructions to do this are outlined below.

    1. Create a temporary directory to save the files to:

      mkdir /tmp/logwatch
      cd /tmp/logwatch

    2. Download the package from SourceForge using the following command.

      wget http://ignum.dl.sourceforge.net/project/logwatch/logwatch-7.4.0/logwatch-7.4.0.tar.gz

    3. Unpack the archive that you downloaded in step 2.

      tar xzvf logwatch*

    4. cd to this directory.

      cd logwatch[tab]

      [tab] means that if you press the tab key on your keyboard the name of the directory / file will be automatically completed for you. When using the console this saves a lot of time.

    5. Make the install file executable.

      chmod +x install[tab]

    6. Run the install script.

      ./install[tab]

    7. Answer all questions with the defaults by pressing the enter key.
    8. The config file will now have been created at /etc/logwatch/logwatch.conf
    9. Use the lines above to specify what you want to configure.

    Alternatively, run the following command, replacing the email address with your own. This runs Logwatch directly and does not read from a configuration file.

    logwatch --output mail --format html --mailto joe.bloggs@MadeUpCompany.com --archives no --range yesterday --detail Med


  • VOIP over any network over a VPN.

    I just wanted to summarise what I’ve been doing over the past few days.

    Firstly, a quick overview of the systems I have running here.
    Almost everything is virtualized and runs under ESX4.
    I have a few Windows 2008 boxes. One of these acts as a domain controller, another is an Exchange 2007 server, another is for Forefront, another is a SQL server, another is for backups, and the last is for administration and the VPN. I have a number of Linux servers as well: one is for web hosting, another is for the firewall and routing, and the last one is for Linux-hosted email and mailing lists.

    Over the past while I’ve upgraded the antivirus for Exchange and client desktops to Forefront 2010.

    I also installed Microsoft system center data protection manager 2010. That’s working particularly well. I’m delighted with it. It’s backing up Exchange and a few major file shares as well.

    Most recently I’ve been tightening up on security. I’ve added a VPN and blah blah blah. That’s not very interesting.

    What is kind of cool, however, is that with the VPN I am able to connect to my Trixbox server. Oh, I forgot to mention that one at the start of this post. It provides all my home phone connectivity. So, no matter where I am in the world, I can still take calls made to my home phone number.

    That’s kind of cool isn’t it?


  • Configuring the firewall on the desktop and server side to allow DPM 2010 to push out the client.

    Very simply, there are pages out there that will tell you what ports are required for distributing the Data Protection Manager agent, but the problem is they don't tell you what is required on the client side.

    This command will open the firewall to allow the agent push. This is messy and not really what you would ordinarily like to do, as it doesn't really allow for alterations to most hardware firewalls, but for a normal Active Directory network set up it will work fine.

      netsh advfirewall firewall add rule name="Allow DPM Remote Agent Push" dir=in action=allow service=any enable=yes profile=any remoteip=123.123.123.123

    Just replace the IP address with the one you have assigned to your DPM 2010 server.

    You should probably remove this rule from your firewall after this is done.
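
    Once the agent is installed, the rule can be taken back out with the matching netsh delete command; a sketch using the same rule name as above:

      netsh advfirewall firewall delete rule name="Allow DPM Remote Agent Push"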


  • Backing up to a remote server using scp and checking your results.

    As promised, here is the next part of my series on backing up a remote Linux server.

    This script is still quite straight forward but on the up side, the more straight forward it is, the easier it is to troubleshoot if something goes wrong down the line.

    It does a few things. It downloads all the archives in the backup directory, checks that they're downloaded and, if the check is successful, runs a further check to make sure there are no problems. If something has gone wrong, it is logged to a file matching that date with an extension of .err.

    #!/bin/sh
    thisdate=$(date +%Y%m%d)
    backupstatus=failed
    logdir=/home/YourUserName/backups/logs
    backupdir=/home/YourUserName/backups
    mkdir $backupdir/$thisdate
    scp YourRemoteUserName@IPAddressOfServer:backups/*.gz $backupdir/$thisdate/ && echo $thisdate files downloaded from server into $backupdir >> $logdir/$thisdate.log && backupstatus=success
    if [ "$backupstatus" = "success" ]; then
    ls $backupdir/$thisdate/ && echo $thisdate files are in $backupdir/$thisdate >> $logdir/$thisdate.log
    tar ztvf $backupdir/$thisdate/*.gz && echo $thisdate archives checked and decompress correctly. >> $logdir/$thisdate.log
    ls $backupdir/$thisdate/ || backupstatus=failed1
    if [ "$backupstatus" = "failed1" ]; then
    echo $thisdate The files did not download >> $logdir/$thisdate.err
    else
    tar ztvf $backupdir/$thisdate/*.gz 2> $logdir/$thisdate.err
    fi
    fi
    thisdate=
    backupstatus=
    logdir=
    backupdir=

    As always, I like to clean up my variables. The declarations of these are at the top and the bottom of the script.

    In the middle is where the interesting stuff is.

    As in the last script, the command after the && will only run after the first command completes successfully. Therefore, it’s a great way of easily checking for the right exit status.
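
    The && behaviour is easy to see in isolation; a minimal sketch using built-in commands:

```shell
# '&&' chains: the right-hand command runs only if the left exits with status 0.
true && echo "ran because true succeeded"
false && echo "never prints"
# '||' is the mirror image: the right-hand side runs only on failure.
false || echo "ran because false failed"
```

    The || form is handy when you want to record a failure rather than a success.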

    So, when I run ls on the directory that should hold that night's backups, I'm validating the check done above that the download was indeed successful.

    The next check is much more important. It makes sure that the downloaded archives are readable. Notice the t switch in the tar command: "tar ztvf". Again, if this is not successful, the log won't be updated.
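
    The t switch can be tried out safely on a throwaway archive; a quick sketch, with /tmp/tarcheck as an assumed scratch path:

```shell
# Build a small throwaway archive.
mkdir -p /tmp/tarcheck
echo "hello" > /tmp/tarcheck/sample.txt
tar czf /tmp/tarcheck/sample.tar.gz -C /tmp/tarcheck sample.txt
# t lists the contents without extracting; a corrupt archive makes tar exit non-zero.
tar ztvf /tmp/tarcheck/sample.tar.gz && echo "archive is readable"
```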

    Of course, if things fail, I want to know why! So, that's where the next if block comes in. Instead of just writing success or fail status messages to the logs, it puts something meaningful into the error log. By redirecting the errors from the tar command, we'll see what has happened: is the file missing, or is the archive corrupt?

    Of course, there's one drawback to this. What happens if not all the archives are generated on the server side? Well, that's where the logs on the server come into play. It would be nice to have them all together in one place, but that's an easy enough job using a few other commands.

    In the next part of this, I will look at backing up individual MySQL databases.


  • Using RSA or DSA for authentication to a Linux server via SSH or SCP.

    Following on from my post yesterday about backups, I thought I'd give a further explanation as to how to copy down the archives that I created in the script.

    For this, I’m using SCP. However, if using SCP, you ordinarily need to log on.

    If you're prompted for a username and password every time your script runs an scp command, it's kind of pointless having cron run the script at all.

    So, to get around the requirement to log in, while at the same time keeping the set up secure, we use an RSA or DSA key.

    For the rest of this post, I'm going to call the machines backup and server. Backup is the machine I am copying the backup files to.

    On the backup machine, type the following commands to generate the files and copy the public file across to the server. I suggest you use a very restricted account on the backup and server for this.

    ssh-keygen -t rsa
    Hit enter for the first question to agree to save the key to /home/YourUserName/.ssh/id_rsa.
    Hit enter without typing anything for the second and third questions, as we don't want a password for this particular key. Note, this is usually not recommended, but it should be OK for this type of situation.
    It will tell you that a public and private key have been created, and it will give you the fingerprint of the newly created key as well.
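
    If you ever want to script this step, the same prompts can be answered with flags instead; a sketch, assuming a throwaway key path of /tmp/backup_id_rsa:

```shell
# Generate an RSA key pair non-interactively: -N "" gives a blank passphrase,
# -f sets the output path, and -q suppresses the banner and fingerprint output.
ssh-keygen -t rsa -N "" -f /tmp/backup_id_rsa -q
ls /tmp/backup_id_rsa /tmp/backup_id_rsa.pub
```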

    Next, you will want to copy the public key across to your server. Note, the server is the machine that hosts your backup scripts.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/

    If this is the first time you’ve used a public key then use the following command as it will make things easier for you.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/authorized_keys

    If however you have used other keys, do the following:
    ssh YourUserName@ServerAddress

    Type your username and password to log in.

    Now, type the following to append the contents of id_rsa.pub to the authorized_keys file. Note that it's cat rather than echo; echo would only append the file name, not the key itself.
    cat .ssh/id_rsa.pub >> .ssh/authorized_keys

    Now, leave the ssh session by typing exit.

    From the backup machine, you can now log in via ssh without providing a password.

    Note!!!

    You might want to secure your keys. If the private key goes missing, this could go very, very badly for you, as it does not require a password.

    Log into the server by typing:
    ssh YourUserName@ServerAddress

    Now, change the permissions of the file so that this restricted user account is the only one with read and write access to the authorized_keys file.
    chmod 600 .ssh/authorized_keys
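
    You can confirm what chmod 600 actually leaves behind by trying it on a scratch file; a quick local sketch:

```shell
# chmod 600 leaves read/write for the owner and nothing for anyone else.
touch /tmp/authkeys_test
chmod 600 /tmp/authkeys_test
stat -c %a /tmp/authkeys_test
# prints 600
```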

    Now, get out of the ssh session by typing exit.

    The next step will be running scp to download your backups and verifying that they're readable. If they're not, we'll want to log the failure.