• Category Archives: Linux
  • Using the Tilda terminal in Linux with full accessibility for Orca users.

    This post was originally written on Friday the 29th of February 2008; however, over the past few years it got lost due to blog upgrades. Because I've noticed a few people looking for this information, I thought it would be a good idea to post it again.

    Yesterday, I decided to play around with a package called Tilda. Tilda is a graphical console for the Gnome desktop. It runs on KDE as well, but it's GTK based. Its main advantage is more bells and whistles for people who like visual effects. No, I'm not into visual effects, for obvious reasons, but I was curious and I like the speed that it launches at.


    After installing it yesterday, I was very happy to see that Orca worked with it right away. When I ran Tilda for the first time, I was given a configuration wizard screen. Orca spoke all of the focusable objects as if the two were made for each other. In the terminal itself, flat review could be used to read the console as you would expect with any accessible application. The only problem was that Orca didn't automatically speak new text as it was written to the screen.


    To try to rectify the situation, armed only with my Windows screen reader knowledge and my curiosity, I renamed the gnome-terminal.py file to tilda.py. That didn't do anything for me. Thinking back, though, I wonder if it failed because I didn't restart Orca before trying Tilda again. My reasoning was that Windows screen readers such as Jaws (in versions before 7) and Window-Eyes used a script or macro type function that was more or less tied to the executable of the application. For example, if notepad.exe was run, Jaws or Window-Eyes would load the settings or scripts for that application if it found a file named notepad.jsb or notepad.001. This has changed in later versions of Jaws and Window-Eyes, but I assumed the logic might be similar in Orca. It wasn't, so I sent a brief email to the Orca discussion list asking for suggestions.


    Rich Burridge, an Orca developer, took some time out of his busy day to help me. With some research, he determined that Tilda actually uses VTE (Virtual Terminal Emulator). This is also used by Gnome-Terminal and already has a lot of accessibility support. This meant it was probably fine to use the Gnome-Terminal script, as Tilda would most likely behave the same way. Only one small change was required: he suggested that I add a few short lines to my orca-customizations.py file. Look at the end of this post for the specific code.
    I want to take this opportunity to describe how Gnome accessibility differs from that provided by Windows screen readers, because in Windows, copying a script from one application to another and expecting it to behave the same would be completely unheard of. In Windows, the screen readers themselves provide the accessibility. In Linux, it's Gnome that provides its own accessibility. Orca takes advantage of this and provides output customized to ensure that users receive the information they need in a way they can understand. That's the short version. Now for some description.


    In Windows, if you are using a screen reader like Jaws with an instant messaging program like MSN, for example, Jaws needs to monitor very high level behavior. That is, it needs to track changes to the interface, read text from the status bar, monitor the entire conversation history area and a lot more. It does this to ensure you hear status updates, incoming messages and contact information, and of course, at times, it needs to keep track of your own actions so it can tell you where you are in any given window. Most of this information is obtained by analyzing the interface. Only a very small percentage of what Jaws gets from Windows comes from information that the application or operating system gives it. In other words, MSN does not communicate with Jaws to tell it that a new message has arrived. Jaws determines this by watching for changes on the screen.


    Gnome, on the other hand, is completely different. It provides assistive software such as the Orca screen reader with information so that Orca can relay it to the user. In the Gnome messaging client, Pidgin, Orca is informed when a new message is sent to the message history window. It then has events, determined by scripts, that tell it what to do with this information. So it doesn't matter how you have Pidgin configured; it will still send this information to Orca, which in turn will relay it to the user. Bringing it back to the terminal, it doesn't matter that Gnome Terminal is completely different to Tilda. Tilda uses different colors, different positioning and a lot of eye candy. None of that matters, because it uses VTE, which provides the required accessibility information to Orca!


    I should also say that although my description of the differences between how Windows and Gnome behave should be accurate, I can't say it with full certainty. I'm not a developer, and if you are really interested in the low level workings of Gnome and how it provides accessibility, I'd suggest subscribing to the Orca mailing list.


    That's all the background and description out of the way. If you're interested in getting up and running with Tilda and Orca, use the following instructions:




    1. Go into a terminal.

      1. Press alt and F2 when in the Gnome desktop.
      2. Type gnome-terminal
      3. Press enter.

    2. Install the tilda terminal.

      1. Type apt-get install tilda
      2. Press the enter key.  When prompted to confirm the package download and installation, type the letter y and again, press enter.
      3. Exit the terminal window.

    3. Instruct Orca to run the gnome-terminal.py script when you run Tilda.

      1. Press alt f2 to start the run dialogue box.
      2. Type gedit then press enter.
      3. Paste the below code into the editor.

        import re
        import orca.settings
        from orca.orca_i18n import _
        orca.settings.setScriptMapping(re.compile(_('tilda')), "gnome-terminal")

      4. Save the document by pressing control and s. Name it orca-customizations.py and place it in the .orca directory in your home folder, which is where Orca looks for customizations.
      5. Exit gedit by pressing alt and f4.

    4. Run the tilda terminal.

      1. Press alt f2 to start the run dialogue box.
      2. Type tilda and press enter.
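
    If Tilda is still silent at this point, it may be because Orca only reads orca-customizations.py when it starts, so a copy of Orca that was already running won't have picked up the new script mapping. Restarting Orca from the run dialogue should sort that out. I believe the --replace switch was available in Orca at the time, but treat that as an assumption:

    # --replace asks any running Orca instance to hand over to this new one (assumed available in this version)
    orca --replace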


    You're done. You are now in the Tilda configuration screen. Configure the package to your own preferences, then use the OK button to save your changes and start the Tilda terminal. This wizard will not be shown automatically again when you run Tilda. To bring it up later, type tilda -C in the launch application dialogue box, accessible with Alt F2.


    I think that should be clear enough. If you have any problems or questions, feel free to leave a comment and I'll try to get to them.


    My thanks to Rich Burridge, who so generously helped with this. Without his help I'd probably still be working at it.


  • Please use strong passwords.

    I go on and on about security, and specifically password complexity, but I should probably write something specifically about the strength and complexity of the passwords you choose.

    Let's first look at passwords you shouldn't use: people, pet, book, film and place names are a massive no-no. In fact, just don't use any name. They're exceptionally easy to guess or obtain. Do not use dates of birth, your lucky lotto numbers, your phone number or your house number. Again, you don't want to make it easy for someone to guess your password; even if they can only guess part of it, the rest becomes considerably easier to crack. Finally, unfortunately, it's no longer enough to just replace letters with special characters when writing words. For example, you cannot write the word Dublin as Dubl1n. Lookup dictionaries used by automated password hacking programs check for exactly this sort of thing.

    There is one form of brilliant password but I’ll explain that to you in a moment.

    For a traditional password I suggest you use the following rules when creating one.

    • The password should be a minimum of 9 characters. Notice it’s not 7 anymore? Unfortunately, as password hacking programs evolve, the complexity and strength of passwords must evolve faster.
    • A password should contain a minimum of 2 uppercase letters, 2 lower case letters, 2 symbols and 2 numbers.
    • You should never write down your password.
    • You should change passwords every 30 to 90 days depending on the importance of the data or system you are protecting. For example, I change my main password manager's password every 14 days. This protects my other passwords, so it's important that it's regularly updated. I have a password that I use for my test Linux virtual machine. This is updated every 90 days because it's not protecting any important data and it's only connected to a handful of systems.

    An example of a secure 9 character password is:

    2$Fwp%3wT

    I try to stay away from symbols such as the at sign and the quotation mark because these can signify the end of a password in some systems, so they may cause conflicts. Of course, I choose the characters in my password based on the application it's protecting so that I have some way of remembering them. This might mean that for a Linux box running Fedora I start the password with a capital F. It goes without saying that I'm giving misleading information here, as I'm not going to be stupid enough to give you a hint that would empower you to hack my passwords, but the policy I follow helps me remember my various passwords while remaining completely obscure to everyone else. The skill of creating highly complex passwords is something you learn over time. Everyone has their own technique, their own standards and their own way of remembering passwords. On the point of remembering passwords, remember there are applications out there specifically designed to help with this.
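
    If you'd rather let the machine pick the characters for you, most Linux systems can generate something usable. This is just a sketch, assuming openssl is installed; check the output against the rules above before adopting it:

    # 12 random bytes, base64 encoded: a 16 character mix of letters, digits, + and /
    openssl rand -base64 12
    # Or pull 12 printable characters straight from the kernel's random pool
    tr -dc 'A-Za-z0-9%^*_+-' < /dev/urandom | head -c 12; echo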

    Taking a step away from passwords, we have pass phrases. What most people don't realise is that standard password fields generally don't have a maximum limit, or if they do, it's around 250 characters. So why not use sentences or phrases instead of passwords? Of course, these phrases can't just be words and names; that would be equally easy to hack, albeit over a longer duration. That's something I should probably mention: the longer your password, the longer on average it takes for a password hacking tool to determine what it is. A pass phrase should therefore cause password hacking tools to take much longer to hack your account, and the longer it takes to hack an account, the more likely it is that the system's intrusion prevention system or firewall will recognise the attempts and block the offending system's IP address.
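
    To put very rough numbers on that (a back of the envelope sketch of the search space, not a model of any particular hacking tool), compare the combination counts with bc:

    echo "26^8" | bc     # an 8 character all-lowercase password: about 2 x 10^11 combinations
    echo "95^9" | bc     # 9 characters drawn from roughly 95 printable characters: about 6 x 10^17
    echo "70^40" | bc    # a 40 character pass phrase dwarfs both, even from a reduced character set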

    A good pass phrase is a sentence that includes as many different letters between A and Z as possible. Of course, like passwords, it's great if you can add in a few capital letters, numbers and special characters.

    For example, a great pass phrase is something like this:

    The big brown dog jumped over the lazy fox.

    Written as a strong pass phrase, this would become something like:

    Th3 b!g Br0wn D0g Jump3d 0v3r Th3 l@zy F0x.

    Ok, I'm just replacing letters with symbols and numbers here, which as I said above isn't bulletproof on its own, but combined with the length of the phrase it at least gets us started.

    I use a pass phrase like this for almost every important system that requires a password. So should you!


  • Planning and updating.

    I’m still mulling over a few ideas that will completely change the way this blog is used and many of these ideas will likely result in the end of the blog as you know it. I’ve been held up though due to accessibility related challenges. I just can’t seem to find the right kind of software.

    There are quite a few things happening behind the scenes as well. Around this time last year, I changed from a VPS to hosting my own server. This year, I've purchased an even more powerful machine, and the plan is to extensively update all of the software that is used for these sites: Microsoft Exchange, the VOIP server, the backup server and the file server. With any luck, and probably a lot of money, I'll be able to expand the technology I use to really take advantage of the high availability, clustering and fault tolerance that is available in many of these systems. This should mean I can sleep at night without worrying about a single patch bringing down the entire system!

    I'm thinking of changing from internal SAS based storage to network attached storage to reduce costs and increase the overall capacity of the file server. At the moment I'm running very low on space, because every file, email, website or voice mail that is written to a disk is also written to a backup server. This means that when I buy one 300GB SAS disk, I need to buy a second just for backups. Even with compression, backups take up a considerable amount of space on my network.

    I’ve been connecting my VOIP PBX to Blueface, a VOIP phone provider in Ireland. They are incredibly reliable and their prices are very reasonable. I couldn’t be happier with their service but one of the reasons that I do all of this stuff is to be able to learn in an environment that isn’t pressured. It might be time to look at alternatives just to have the experience of connecting to different services. To that end, I’m looking into connecting a Skype account to the VOIP server. This may or may not have any benefits. I think it will be cheaper to buy other international numbers and it might allow for connectivity with Skype computer to computer calls but at the moment the idea is in its very early stages.

    The other thing I need to think of is ongoing costs, cooling and noise. I'd love to run two servers in parallel, but this is a costly hobby. The price of electricity is not something I need to be too concerned with at the moment, but if I add another server into the mix it will increase by about forty or fifty Euro a month. That's not something I can really justify, so I'm thinking of a few alternatives to get around this while still having reasonably high availability.

    The first possible solution is to get one server fully set up, buy a NAS box with about 10TB of storage, and set a backup job to copy a snapshot of each virtual machine to it. The file server will also be based on this NAS, so if the main server goes down it should be possible to bring another server into the mix very quickly. The other server will be set up with the same virtual host software. It's most likely going to be Hyper-V. I've based the virtualization on ESX over the past year or two, but I'd like to get more exposure to Hyper-V, so it's worth a shot for a while. I'll segregate this server off onto a private network with only one connection for restores from the network attached storage (NAS).

    Every week or two, I'll power on this second server and restore the virtual snapshots onto it. With a bit of testing I'll be able to ensure that the restores have worked, and because they'll be on a private network they won't have any impact on the live network. The result is that if the main server goes down, I'll be able to bring up a second server instantly, or if it's crucial that it has the most up to date data, it will be up after a few minutes once the snapshot has been restored. Assuming Wake-on-LAN works on these network interfaces, I should even be able to start this second server remotely and restore the snapshots easily.

    It would of course be much nicer if I could cluster both Hyper-V boxes with Microsoft's equivalent of VMware's vMotion, so that if the first server went down the system would automatically fail over to the secondary server. That's probably not going to be possible though.

    The second consideration is heat. Servers generate heat and in turn use more energy trying to cool down. In a perfect server environment, an array of air conditioning, dehumidifiers and fresh air vents would keep conditions at the ideal level for servers to run effectively, but that's just not an option in my kind of environment. For god's sake, I'm running them in my house! At the moment, I have a specific location where all the CAT5 cable is patched back to. This works quite well with a single server, but there are still occasional problems with heat and air flow. I have a plan that will greatly improve the situation, but it has taken a long time to happen. Again, it's all money. Basically, I'll be moving the servers out to a shed that's attached to the house. This is easy to reach via cable and, with some work, should be reasonably easy to keep at a consistent temperature with reasonably good air flow and humidity. The worry is that it will get too cold during the winter, so some insulation is required before I proceed.

    Moving the servers out there will also help with the noise issue. After years of listening to computers all day I have almost filtered out the noise, but I'm aware that a server quietly humming away in a house is not a comfortable situation for some people.

    So, there you have it. For all of you who think I'm insane, you're absolutely right; however, even insane people often have perfectly logical reasons for their actions. For me, working on this kind of thing at home allows me to take full control of the set up, configuration and support of all of these systems. This gives me a great understanding of how it all works. With any luck, when I go for promotions in my current job, or in years to come when I look for a completely new job, it will stand to me.

    I'm also incredibly lucky, but also very unfortunate, with the environment I work with every day. It's very diverse and complicated. Because so many people depend on it, there are tools for managing and monitoring everything. This means that if functionality is needed, a hugely complex enterprise tool can be found and implemented. This slightly spoils me. It means that I don't really have to think of ways of stitching things together or making work-arounds so that systems can communicate with each other. If I knew I was always going to work in this kind of environment where anything is possible, then I'd be perfectly happy with this. However, things might change. I might eventually work in a much smaller company where tools like SCCM, SCOM, NetBotz, WhatsUp Gold and even Backup Exec or Data Protector simply cannot be afforded, so scripts and free applications need to do the same job. I think it's important to show that I'm just as comfortable with small environments as I am with enterprise level systems.

    The other side of it is that by working independently on different systems I get to find accessibility problems in my own time. More importantly, I get to solve these accessibility problems in an environment that isn't pressured. I can then bring these solutions with me into work and apply them when they're needed. It's very important to me that I do not let an accessibility related problem get in the way of doing my job independently and efficiently.


  • I’m looking for a knowledgebase.

    I wonder. Is this too much to ask?

    I’m looking for a free knowledgebase. It doesn’t have to have loads of bells and whistles but it needs to be able to do the following.

    • Allow attachments in Word format.
    • Have a permissions based approach to providing access. Something as simple as a groups and users model would be fine.
    • It needs to have the option of classifying documents as public or private, and / or as released, reviewed or in progress.
    • A reasonably good search facility is also necessary. At minimum it should be possible to organise documents by category, for example: Active Directory, mail, proxy and so on.

    Nice features to have would be:

    • Authentication via active directory.
    • Secondary authentication by IP.
    • Usage reporting.
    • Automated notification to reviewers when documents are waiting to be reviewed.

    I don’t think that’s asking all that much.

    At the moment all the documentation is sitting on a file share. It's organized using folders, but all the documents have fully populated properties, so in a perfect world it should at least be possible to sort and view them by author, subject and so on. I'm surprised this isn't already possible with the libraries in Windows 7. Even that kind of solution would be a nice start compared to what we have at the moment.

    Any suggestions?


  • Messing with Syslog servers.

    In work I have been trying to fix the implementation of a syslog / event log server that currently runs on a Windows 2003 server. It's a very nice product called EventLog Analyzer by ManageEngine.

    I've had issues with the database. Not due to the software, but due to my lack of understanding of how it was configured. Unfortunately, the person who set this software up is no longer with the company I work for, and it looks like although he made the application very secure, he did not document his work. Documentation is probably one of the most underrated yet most important responsibilities of a system administrator. If you're setting up a new system, or even just making a change to the configuration of a system, it needs to be documented. At best, you will ensure the person succeeding you can take over where you left off; at minimum, you'll remind yourself what you did when you have to look at it again a few months later.

    The EventLog Analyzer that we use looks like it would run more efficiently on Linux. The Windows server it is currently installed on spends more resources keeping the operating system running than keeping the application performing well. I also prefer this kind of thing running on Linux because it's rock solid, and in the unlikely event that something goes wrong, the logs are usually much more comprehensive and easier to read than those found in Windows.

    The problem with EventLog Analyzer running on Linux is that it requires a Windows event log forwarder on each monitored system running the Windows operating system. As this organization primarily uses Windows, this is a bit of a challenge. Of course, if I found a good event log forwarder that ran as a service and could be configured remotely, I'd be fine, because using either SCCM or group policy I could easily deploy it to all servers in the estate. With a bit of research I found that event mon from monware will do everything I need. It runs as a small service requiring no user intervention during installation, and it can be configured via the registry. This registry configuration can be exported by the eventmon client and then distributed via group policy or SCCM, so it would be really nice to get this running. Unfortunately it involves a licence cost, and as we're already paying for the ManageEngine EventLog Analyzer, this isn't really a viable option. There is no way I can justify my own preferences as grounds for purchasing additional licenses when, with a little more work, I can get EventLog Analyzer running on a Windows machine that will inherently support our Windows servers without the use of an event log forwarder. There is an application out there called NT syslog; however, although it runs as a service and, from my understanding, is free, it doesn't support Windows 2008 servers and it's no longer in development.

    There are a few things I don't like about EventLog Analyzer. Firstly, it looks like it was made for Linux and just ported to Windows as an afterthought. There is no real user interface on the Windows side of things. Of course, it comes with a really great web interface, but when trying to troubleshoot why the application isn't connecting to its proprietary, cut down version of MySQL, it's very difficult to see how it all fits together. There are bat files that expect arguments when run from a command line, yet there's no documentation of these arguments, and when I've tried to guess them the output I get is far from descriptive. There are also scripts and executables everywhere and very little documentation of anything outside the web interface.

    I love syslog servers. The ability to see all the event logs at a glance and report on the top errors and the top error generators is a fantastic facility, especially when administering hundreds of servers. Unfortunately my experience with this type of server has been far from good. They usually have fantastic web front ends with terrible back ends or terrible documentation for the back end, or they have a fantastic back end but poor or limited functionality in the web based interface. I just don't seem to be able to win when using these products.

    Ok. I’m going to dive into this again.

    Hey, on the up side, while configuring the test Linux server yesterday I decided to install openSUSE 11.3. I hadn't used openSUSE in a while, so it was nice to have a look at the changes in it. To my delight and surprise it connected to Active Directory instantly, without any added configuration. This is a really nice improvement. I hope that other distributions of Linux follow this example. It would be nice to have one set of credentials for all systems.


  • Install Ubuntu Linux from USB.

    This is actually quite easy.
    Follow the steps below.

    • In Linux, go to a terminal and type the following:
      sudo apt-get install usb-creator
    • Type your password when prompted.
    • Insert a USB key. In most cases, the USB key will be mounted automatically by the system.
    • Go to the System menu, then to Administration, and then finally to Startup Disk Creator.
    • Point the creator to the ISO that you've downloaded.
    • Point the creator at the USB key that you want to use. Be aware that you do not need to use the erase button. I pressed it thinking that I first needed to remove the content on that disk before the process could continue; what it actually does is unmount the partition, which is not what you want.
    • Now just use the Create Startup Disk button and wait for the process to finish.
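
    Before you point the creator at an ISO, it's worth making sure the download isn't corrupt. A quick sketch; the file name below is only an example, and the MD5SUMS file comes from the same download page or mirror as the image:

    md5sum ubuntu-10.04-desktop-i386.iso
    # Compare the output against the matching line in the MD5SUMS file published alongside the image.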

    That's really all there is to it.


  • Choosing a password manager and getting the PHP Password Manager installed.

    An update to this post detailing the KeePass password manager is available.

    I have been searching for a decent password manager for ages. Ideally I'd love to be able to use Network Password Manager: from using it in work I know that it's a really small and fast application that integrates with Active Directory easily and provides some really nice search functionality. I was looking for something that would accept authentication from multiple users and would also store license files, and Network Password Manager is really the best option. The problem is, it's far too expensive to justify the cost.

    When I couldn't find a decent installable application that I could access from any Windows PC and that would read passwords from a central location, I started to look for web based applications. There are some great applications out there, but none of them were secure enough or provided the right level of encryption. Passwords, even if they're just for websites, are probably your most important asset when you're online a lot.

    After a bit of digging I found PHPPasswordManager and USB password manager. I was almost willing to consider bringing a pen drive everywhere with the USB password manager on it, but I knew that at some stage I wouldn't have it with me when I needed it most. PHPPasswordManager seemed to be the best bet. It didn't have everything I wanted, but it was simple, lightweight and fast, and it wouldn't take all that long to get running.

    In the end, I decided to go with PHP Password Manager, as it encrypts passwords before sending them to or from the server and the user interface is very clean. It required a bit of work though.

    I have customized this web application extensively in a very short time so that the interface provides the information I want at the top, the help information at the end of every page is hidden and only shown if or when I want it and I’ve replaced some of the buttons such as configure and add with links to make it easier to jump to them very quickly.

    Most importantly, after installing PHPPasswordManager, I found that its authentication wasn't as good as I thought it was going to be. When a user visited the URL, they could see all of the accounts that had passwords associated with them. This isn't all that bad: with some cryptic names it could be hard to determine what systems the passwords were for, and of course the passwords can only be unlocked with the master password. It was still a concern, though, so I have password protected the directory that this site is in, and I only accept logins from one account. These details are sent using digest authentication to add more security.

    The following summarises the steps I used to install PHPPasswordManager:

    1. Download the .gz archive to your Linux box by visiting the URL:
      http://sourceforge.net/projects/phppassmanager/
    2. Extract the archive using
      tar xzvf phppassmanager*
      when in the directory containing the downloaded file.
    3. Navigate to the install directory:
      cd phppassmanager*/install
    4. Create the database:
      echo "create database passwordmanagement" | mysql -u username --password=password
      Replace the username and password with an account that has the required privileges to add databases.
    5. Add the tables into the database:
      mysql -u username --password=password phppassmanager < tables.sql
      Again, replace the username and password.
    6. Using PHPMyadmin, create a new account and give it access to the database we have just created.
    7. Edit config.php and change the username, password and database to provide the information you have just added.
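
    Incidentally, if you'd rather not use PHPMyAdmin for step 6, the same account can be created from the mysql command line. This is only a sketch: the pwmanager name and the password are placeholders, so substitute your own. On MySQL of this vintage, the grant should create the account if it doesn't already exist.

    # 'pwmanager' and 'AStrongPasswordHere' are placeholders; substitute your own
    echo "grant all privileges on passwordmanagement.* to 'pwmanager'@'localhost' identified by 'AStrongPasswordHere'; flush privileges;" | mysql -u root -p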

    Create a new virtual directory for this. You can most likely paste the following into /etc/apache2/sites-available/default:

    Alias /passwords "/home/web/phppassmanager/"

    <Directory "/home/web/phppassmanager/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride AuthConfig
    Order allow,deny
    allow from all
    </Directory>

    Obviously, it goes without saying that you will need to change the paths etc in this to reflect the structure of your file system.
    Now reload your Apache2 config.
    /etc/init.d/apache2 reload
    Navigate to yourdomain/passwords in your browser.
    The password manager should be shown.

    Now, let's harden the configuration a little bit.

    1. Within /home/web/phppassmanager, or wherever you have left this directory, you will see a directory called install. Rename this to TMPinstall. It can be deleted at a later date; leave it there for the moment in case you need it in the coming days.
    2. Now, let's password protect the directory. Because we're going to use digest authentication, create the password file with htdigest; the realm in quotes must match the AuthName we'll use in the .htaccess file below.
      htdigest -c /etc/apache2-passwords "Restricted Files" YourUsername
      Replace YourUsername with whatever name you want to log in with.
      You will be asked to enter your password twice.
    3. Enable the auth_digest module:
      a2enmod auth_digest
    4. Restart Apache2.
      /etc/init.d/apache2 restart
    5. Use nano or your favourite text editor to create a .htaccess file:
      nano /home/web/phppassmanager/.htaccess
      Remember to change the path to reflect your own set up.
    6. Paste the following lines. Take care to change the path to the password file and change the username as well.
      AuthType Digest
      AuthName "Restricted Files"
      AuthUserFile /etc/apache2-passwords
      Require user YourUsername
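
    To check that the protection is actually being enforced, curl makes a handy test client, assuming it's installed; replace yourdomain as before. The first request should come back with a 401, and the second should succeed after prompting for your digest password.

    # yourdomain is a placeholder for your own site
    curl -I http://yourdomain/passwords/
    curl --digest -u YourUsername http://yourdomain/passwords/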

    That's all there is to it.
    Go to the configure button and start making groups.
    Add passwords.
    It’s all very easy after that.

    This set up has one major limitation: it doesn't allow for multi-user environments. But for what I need right now, it will do… just about.


  • Logwatch in Debian Linux.

    Installing Logwatch is very straightforward, and it's definitely worth taking a few minutes to do it. The format it can send your system logs to you in is so nice and easy to read that you'll wonder how you ever kept track of your server without it.

    I like logs to be mailed to me every morning. These are the steps you need to take to get a similar report:

    1. Firstly, run the following command to install Logwatch. I'm assuming you already have a working mail system, such as Postfix or Sendmail, installed.

      apt-get install logwatch

    2. The config file you need to edit is located at:

      /usr/share/logwatch/default.conf/logwatch.conf

    3. I’d suggest replacing the following entries as follows:

      Line 35
      Output = mail
      Line 37
      Format = html
      Line 44
      MailTo = name@mydomain.com
      Line 45
      MailFrom = logwatch@mydomain.com
      Line 67
      Archives = No
      Line 70
      Range = yesterday
      Line 77
      Detail = Med

    4. Test your logwatch configuration by running logwatch on the command line.
    5. Create a new cron job to run this at 5:45AM every day. This is the time I generally have reports sent out; backup jobs, Windows and Linux security reports and Logwatch reports all go out between 5:30AM and 6AM so that things are spaced out.

      crontab -e
      45 5 * * * /usr/sbin/logwatch

    That’s all there is too it.

    Update on 27th January 2012

    Logwatch in some versions of Debian is slightly broken if you choose to format messages using HTML. To get around this you will need to download the source package and install it. The instructions for doing this are outlined below.

    1. Create a temporary directory to save the files to:

      mkdir /tmp/logwatch
      cd /tmp/logwatch

    2. Download the package from sourceforge by using the following command.

      wget http://ignum.dl.sourceforge.net/project/logwatch/logwatch-7.4.0/logwatch-7.4.0.tar.gz

    3. Unpack the archive that you downloaded in step 2.

      tar xzvf logwatch*

    4. cd to this directory.

      cd logwatch[tab]

      [tab] means that if you press the tab key on your keyboard the name of the directory / file will be automatically completed for you. When using the console this saves a lot of time.

    5. Make the install file executable.

      chmod +x install[tab]

    6. Run the install script.

      ./install[tab]

    7. Answer all questions with the defaults by pressing the enter key.
    8. The config file will now be created in /etc/logwatch/logwatch.conf
    9. Use the lines above to specify what you want to configure.

    Alternatively, run the following command, replacing the Email address with your own, of course. This runs Logwatch without reading from a configuration file.

    logwatch --output mail --format html --mailto joe.bloggs@MadeUpCompany.com --archives no --range yesterday --detail Med


  • Backing up to a remote server using scp and checking your results.

    As promised, here is the next part of my series on backing up a remote Linux server.

    This script is still quite straightforward, but on the up side, the more straightforward it is, the easier it is to troubleshoot if something goes wrong down the line.

    It does a few things. It downloads all the archives in the backup directory, checks that they're downloaded, and if that check is successful it runs a further check to make sure there are no problems with the archives. If something has gone wrong, it is logged to a file matching that date with an extension of .err.

    #!/bin/sh
    # Date stamp used to name tonight's backup directory and log files.
    thisdate=$(date +%Y%m%d)
    backupstatus=failed
    logdir=/home/YourUserName/backups/logs
    backupdir=/home/YourUserName/backups
    mkdir $backupdir/$thisdate
    # Download the archives. The flag only changes to success if scp exits cleanly.
    scp YourRemoteUserName@IPAddressOfServer:backups/*.gz $backupdir/$thisdate/ && echo $thisdate files downloaded from server into $backupdir >> $logdir/$thisdate.log && backupstatus=success
    if [ "$backupstatus" = "success" ]; then
    ls $backupdir/$thisdate/ && echo $thisdate files are in $backupdir/$thisdate >> $logdir/$thisdate.log
    # The t switch lists each archive's contents, proving it decompresses correctly.
    tar ztvf $backupdir/$thisdate/*.gz && echo $thisdate archives checked and decompress correctly. >> $logdir/$thisdate.log
    # If the directory can't be listed, nothing was downloaded.
    ls $backupdir/$thisdate/ || backupstatus=failed1
    if [ "$backupstatus" = "failed1" ]; then
    echo $thisdate The files did not download >> $logdir/$thisdate.err
    else
    # Re-run the tar check, this time sending any errors to the error log.
    tar ztvf $backupdir/$thisdate/*.gz 2> $logdir/$thisdate.err
    fi
    fi
    # Clean up the variables.
    thisdate=
    backupstatus=
    logdir=
    backupdir=

    As always, I like to clean up my variables. They are assigned at the top of the script and cleared again at the bottom.

    In the middle is where the interesting stuff is.

    As in the last script, the command after the && will only run if the command before it completes successfully. Therefore, it's a great way of easily checking for the right exit status.

    So, when I run ls on the directory that should hold that night's backups, I'm validating the earlier check that the download was indeed successful.

    The next check is much more important. It makes sure that the downloaded archives are readable. Notice the t switch after the tar command: "tar -ztvf". Again, if this check is not successful, the log won't be updated.

    Of course, if things fail, I want to know why! So, that's where the next if block comes in. Instead of just writing success or failure status messages to the logs, it puts something meaningful into the error log. By redirecting the errors from the tar command, we'll see what has happened: is the file missing, or is the archive corrupt?

    Of course, there's one drawback to this. What happens if not all the archives are generated on the server side? Well, that's where the logs on the server come into play. It would be nice to have them all together in one place, but that's an easy enough job using a few other commands.
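
    If you want cron to run this for you each night, a crontab entry along these lines will do it. The script path here is only an example; point it at wherever you saved the script.

    crontab -e
    # example path; adjust to wherever you saved the script
    30 2 * * * /home/YourUserName/backups/getbackups.sh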

    In the next part of this, I will look at backing up individual MySQL databases.


  • Using RSA or DSA for authentication to a Linux server via SSH or SCP.

    Following on from my post yesterday about backups, I thought I'd give a further explanation of how to copy down the archives that were created by the script.

    For this, I’m using SCP. However, if using SCP, you ordinarily need to log on.

    If you're prompted for a username and password every time your script runs an scp command, it's kind of pointless having cron run the script at all.

    So, to get around the requirement to log in, while at the same time keeping the set up secure, we use an RSA or DSA key.

    For the rest of this post, I'm going to call the machines backup and server. Backup is the machine I am copying the backup files to.

    On the backup machine, type the following commands to generate the keys and copy the public key across to the server. I suggest you use a very restricted account on both the backup machine and the server for this.

    ssh-keygen -t rsa
    Hit enter for the first question to agree to save the key to /home/YourUserName/.ssh/id_rsa.
    Hit enter without typing anything for the second and third questions, as we don't want a passphrase on this particular key. Note: this is usually not recommended, but it should be OK for this type of situation.
    It will tell you that a public and private key have been created, and it will give you the fingerprint of the newly created key as well.

    Next, you will want to copy the public key across to your server. Note, the server is the machine that hosts your backup scripts.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/

    If this is the first time you’ve used a public key then use the following command as it will make things easier for you.
    scp .ssh/id_rsa.pub YourUserName@ServerName:.ssh/authorized_keys

    If however you have used other keys, do the following:
    ssh YourUserName@ServerAddress

    Type your username and password to log in.

    Now, type the following to append the contents of id_rsa.pub to the authorized_keys file.
    cat .ssh/id_rsa.pub >> .ssh/authorized_keys

    Now, leave the ssh session by typing exit.

    From the backup machine, you can now log in via ssh without providing a password.
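
    A quick way to prove the key is doing its job is to run a single command over ssh. If this prints the server's hostname without asking for a password, everything is in place.

    # ServerAddress as before; should print the hostname with no password prompt
    ssh YourUserName@ServerAddress hostname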

    Note!!!

    You might want to secure your key files. In particular, if the private key (id_rsa) goes missing, things could go very, very badly for you, as this key does not require a password. It's also worth tightening permissions on the server side.

    Log into the server by typing:
    ssh YourUserName@ServerAddress

    Now, change the permissions of the authorized_keys file so that this restricted user account is the only one with read and write access to it:
    chmod 600 .ssh/authorized_keys

    Now, get out of the ssh session by typing exit.
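
    Back on the backup machine, it's worth locking down the .ssh directory and the passwordless private key in the same way; sshd is usually strict about these permissions anyway.

    # the private key has no passphrase, so keep it readable only by this account
    chmod 700 .ssh
    chmod 600 .ssh/id_rsa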

    The next step will be running scp to download your backups and verifying that they're readable. If they're not, we'll want to log the failure.