• Category Archives: Accessibility
  • Continuation of Mixing the old with the new. Nokia C5 and iPhone 4S.

    There were a few interesting questions and points made as a result of my post yesterday. First, let me just remind readers that I love getting your emails and phone calls, but it would be nice if you would comment on the site instead of contacting me privately, so that other readers can have the benefit of reading your questions and observations.

    Jenny asked if the C5 has wifi. No, it doesn't, although as I'm not using this phone for any data usage, this actually makes no difference to me. I'm interested in what you might use wifi for though. Are there apps on the S60 platform that you would use?

    Nicky touched on the idea of using an iPod for listening to music and using apps. This is a very good idea. The iPod is smaller, lighter and cheaper, and if you're not using it for phone calls or texting then there's no need for 3G. However, the iPhone battery lasts for a very long time when not used as a normal phone, and there is nothing the iPod does that the iPhone can't, so there's no need to change over if you already have an iPhone. Also, because I could potentially change back to the thinking that one device is just more convenient, selling the iPhone would be a mistake; a 64GB iPhone 4S is not a cheap toy at all. I've already done this in the past. I moved back to a Nokia phone for a while about three or four years ago, but after a while I missed the power of the iPhone so I went back again. However, at that time I wasn't running both phones simultaneously, so things may be different this time.

    For me, the iPhone has almost become essential. I use FaceTime with sighted people when I need something looked at, I use the many social networking apps to stay in contact with people, I read the local and national news, I keep up to date with email and I even use it for GPS occasionally.

    However, I have a tip for you. I have a wireless Vodafone dongle. I usually have my laptop with me when traveling to and from work, and this wireless dongle has a nice place in that laptop case. When I really want Internet access on the iPhone while traveling, I just turn on the wireless dongle, connect to it from the iPhone and I have the same data access as I had when using it as a phone. Really, the only downside to this is that I have a few more devices to carry around. However, this is more than made up for by the efficiency of being able to make and receive calls and write text messages quickly and comfortably. I've been using this method now for just over a week, and so far it's working quite nicely. Ask me again in a month though. Maybe by that time I'll be tired of carrying an extra phone around with me.

    Just one more note. I have given serious consideration to an iPad or an iPad mini, however as a blind person I simply can't understand why one of these devices would be appealing to me. The larger screen makes absolutely no difference. Why not just use an iPhone or an iPod? The iPad mini feels lovely and sexy. It's slim, curved and light, but once you get over that, what's the benefit if you can't see the screen?


  • Mixing the old with the new. Nokia C5 and iPhone 4S.

    I’m sure you couldn’t care less what phone I’m using or why, but I want to explain something to you.

    I am now using a Nokia C5 for day to day phone needs. I haven't completely moved away from the iPhone, but for making and receiving calls and sending text messages there's just no beating the convenience of a classic mobile phone. When I want to dial a number, I simply key it in on the numeric keypad. When I get a text, I can respond to it with one hand if I want to. When I'm looking for a contact, I dial in the first few letters and the phone searches for it. Finding Frank, for example, takes me less than two seconds. Finding Frank on the iPhone takes a lot longer.

    That's not to say that I have anything against the iPhone or that I have turned away from Apple products. I just got sick of fluffing around with a phone when all I wanted to do was answer or hang up a call. In fact, I'm going to get my frustrations out here by listing some of the things that are driving me crazy about the iPhone. Read on though. I'm also going to tell you why I still carry an iPhone around with me.

    • When I hang up a call I should be able to press the power button, but this only works intermittently. It is fixed in some updates but breaks again with the next.
    • Taking the iPhone away from my ear causes it to switch to the loudspeaker. I know this is by design, but it's irritating.
    • A bug that has been on the iPhone since iOS 4 causes the phone to intermittently switch back and forth to the loudspeaker while VoiceOver users are on a call.
    • Texting on the iPhone's on-screen keyboard is horribly slow, cumbersome, unproductive and difficult. Even Fleksy isn't great if you're in a noisy area and you can't hear the phone. Also, it's badly designed for when you're holding the phone up to your ear to hear the text to speech synthesizer.
    • Bugs are frequently not caught or not fixed. For example, in iOS 6, VoiceOver should speak new notifications when the screen is locked if the option is enabled, but this no longer works. This senseless disregard of simple bugs has turned me off Apple to a large extent. In fact, because of this I recently sold my MacBook Air.
    • The battery life is absolutely terrible. I charged my Nokia C5 on Sunday evening and I won’t need to charge it until tomorrow night. Imagine that. Three days of phone usage on one charge!
    • The iPhone is too big and it’s getting bigger! I don’t like the extra bulk of the iPhone 5. I also don’t like having to put a case on my phone. If it is vital to have a case on a phone to stop it from becoming easily damaged then the phone is badly designed.

    The iPhone is still brilliant. As I said before, I don't want this post to seem like I've turned against this product. I still carry one around with me and I use it when in range of wireless networks. I know you might think this is crazy, and I would ordinarily agree with you, but access to the Internet and apps simply can't be rivalled by any other phone. The iPhone has more apps than any other platform, and thanks to the VoiceOver screen reader, as blind people we have the benefit and luxury of access to the vast majority of these. I simply wouldn't want to do without the connectivity provided to me by the iPhone. However, as a simple phone and text utility, the iPhone has a long way to go before it is efficient in comparison to classic mobile phones. In fact, a few people have commented that call quality is clearer when I speak to them from the Nokia C5, and I also find that I can continue a conversation for longer when traveling home by train than I can when using the iPhone.
    I have examined other platforms, specifically Android, Blackberry and Windows Phone, and although I think they have a lot of merit for most mobile phone users, they unfortunately can't compete with the accessibility of the iPhone. The Android platform has a screen reader and it is making slow and steady progress. I would like to see it reach the point where it can meet the expectations of usability and efficiency set by the iPhone. The Blackberry platform has also improved recently, but the stability of the screen reader on this platform doesn't seem to have lived up to the hype. Finally, Windows Phone. Ah, good old Microsoft. No accessibility for blind users at all. There's absolutely no screen reader on this platform. I can only hope that they'll fix this soon, because I actually like what I've read about this platform so far and I really enjoyed using previous versions of Windows Mobile. I know that since 7.5 the platform has changed substantially, but I loved the interconnectivity between the mobile and desktop platforms.

    I want to say something to you about Windows Mobile for a second. In the nineties, Microsoft launched a mobile platform whose user interface was based on the PC desktop. This idea was a complete disaster. Microsoft had to completely change their approach to Windows Mobile to win any kind of market share. It was acknowledged that the expectations and requirements of users were vastly different on the two platforms. This brought about the lovely idea of the today screen that we have enjoyed on Windows Mobile for about ten years. In Windows 8 and Windows Phone 8, this today screen has become much more powerful with its evolution into the start screen. In 2012, Windows 8 for the desktop and laptop has taken on a look and feel similar to Windows Mobile. About twelve years on from the catastrophe that was Windows CE for mobile devices, with its user interface based on the desktop version of Windows, we now have Windows 8 for the desktop based on the user interface of mobile devices. So, I have two questions for you. Is Microsoft looking at another disaster, or do users really want this new and improved today screen on their desktops? I'm not sure. For me, I wasn't too happy with Windows 8. I found that even after customization of the environment, it was still trying to push its own objectives onto me: use Microsoft services for sign-on, cloud storage, search, mail and chat. Of course they can't be anti-competitive, so alternatives are available, but it's easy to see what the preference is. Your thoughts are welcome.


  • Window-Eyes versus Jaws?

    It’s that time again.
    Do I spend €445 on a Jaws upgrade and another SMA, or do I move to a rival screen reader? Really, Window-Eyes is the only application that comes close to competing with Jaws in my experience, so it is the only one I am considering. Hal by Dolphin is just so far behind that I haven't given it a second thought. This is just my opinion though. If you're going through a similar decision, then I encourage you to keep all of your options open.

    I've downloaded a demo of Window-Eyes and I'm currently running it through its paces. I'll have to blog about this in more detail, but right away I miss some of the more advanced features of Jaws that don't just make applications accessible, they make applications more intuitive and more efficient. When I talk about access, I don't just need basic screen reading functionality; I need an application to assist me in accessing data as quickly as possible.

    I like some features of Window-Eyes though. For example, the open scripting framework allows standard development languages to be used. This is a major selling point.

    I’ll write about this in a little more detail over the next few days I hope.


  • Thoughts about the Mac; post 2 – Tips and tricks.

    It’s funny how easy it is to get motivated to write a new blog post when using such a comfortable keyboard. Sorry, Emma expects that I’ll mention the keyboard during every Mac related post… I aim to please. 🙂

    I'm learning a lot about the Mac and the way OS X does things every day. I thought I should list some of the little tips and tricks I've picked up. I should give credit where credit is due, of course. A lot of what I learn is shared freely by the Mac users on Twitter. Without them, I think I would have found this process particularly difficult.

    I’ll break these down into a list.

    • When using the YoruFukurou Twitter client with VoiceOver, you will find that if quick nav is turned on, replying to tweets can be a little hit and miss. Pressing enter on a tweet may result in the wrong name being added to the text field. The very simple solution is to turn off quick nav while in this application. This actually has the effect of making navigation around the various tables, tabs and edit fields much easier.
    • In Mail, VoiceOver tells you that a conversation has a number of unread messages. Again, when using quick nav, expanding this conversation to read the messages in it is not as straightforward as you might think. You have to interact with the message, find a particular graphic and hit VO and space to activate it. Again, it's one of those situations where the message table works best if you turn off quick nav. To do this, press the left and right arrows together. Then you can expand the conversation by pressing right arrow. For some reason, VoiceOver isn't particularly responsive when reading messages, and at times, if a conversation is collapsed, it can fail to read anything at all. This could be something I'm doing wrong though. Of course, any comments regarding this or any other Mac post are more than welcome.
    • The widget area is cool! I'm still getting to grips with it, but from what I've been able to figure out so far, widgets are reasonably accessible for the most part. I'm still looking for a nice RSS reader, but I'm sure I'll find one eventually. I think the widget area is easier to use when the trackpad commander is turned on. Double tap the right side of the trackpad to bring up a list of widgets. Configuring some of the widgets can be a little hit and miss, but it's certainly possible given some time. For example, the weather widget let me configure my locality, but the done button wouldn't work when I used the arrows in conjunction with the VO modifier to navigate to it. I found that I had to delete the county and country from the text box and re-add them. This time, instead of using the arrows, I tabbed over to the done button and hit VO and space to activate it. I have no idea why, but this works perfectly every time. By the way, when I say I hit VO, I mean that I am using the standard VoiceOver modifier keys. These are control and option. Yes, unlike Windows and even Linux screen readers that have just one modifier button, VoiceOver has two that need to be pressed together. In my view, as a beginner I should add, this complicated modifier is just the start of what is one of the most mind bending keyboard command structures I have ever had the misfortune to come across. Seriously, I don't know what the person who came up with these keyboard commands was smoking, but it must have been some powerful stuff!
    • To bring up the notification bar, swipe with two fingers from right to left, starting at the very edge of the trackpad. Swiping from the middle, or more accurately not swiping from the very edge of the trackpad, will cause VoiceOver to stop interacting with the current control. I like this actually. To get out of the notification area, either scrub the trackpad with two fingers or press escape. Both actions do essentially the same thing.
    • To get the number sign, press command and 3.
    • To get the Euro sign, press command and 2.
    • In the menu extras area, you can't just press space when using quick nav like you could in previous versions of OS X. I know that it was called something different in previous versions as well, but it's basically the same thing. I don't know why they have broken the convention in this single area. It's actually a bit frustrating. Anyway, as I'm sure you know, either press VO and space to activate the item or use the VO and down arrow combination.
    • Don't underestimate the power of the VoiceOver help. Press VO and h to launch it. The commands help is brilliant when getting started. Not only will it list the commands, you can press enter on one of them to perform that action. One of the commands I find most useful is in the general menu. It's "Bring window to front." So many windows are launched containing system messages, but for some reason you can't set focus to them using command and tab. Pressing VO, shift and F2 usually does it though. Oh, that brings me to another irritation while using the Mac. On the MacBook Pro and the MacBook Air, a number of VoiceOver commands such as this also require you to press the function key as well as, for example, VO and shift. However, it doesn't actually say this in the commands list. It's annoying to think a command doesn't work, only to find that it's one of the few that requires the function button for some reason. I'm not even sure why! Almost all of the Apple keyboards are the same. Why it would require the function key is a bit of a mystery to me.

    Ok, my bus is getting to its stop so I have to go. Let me know if you have any questions or, better yet, any suggestions.

    Until next time!


  • Thoughts about the Mac; post 1

    Sorry for not posting yesterday, I'm still getting used to this Mac. Because OS X is very different, my plan is to document my progress as I learn more about how OS X does things. It's a huge learning curve, and it's compounded by the necessity of learning an entirely new screen reader at the same time. Because of this, a lot of my findings are from the perspective of a blind Mac user and therefore unfortunately may not be as interesting to the sighted readers of my blog. However, stick around, you might learn something…

    As I was saying before, I'm comfortable with some of the more administrative tasks required on the Mac, such as joining Macs to Active Directory, configuring group policy for them, installing different antivirus products and so on, but actually using one from an end-user's perspective was completely new to me. Fortunately, I'm not starting at square one though.

    Right, let's get started. The first thing I do when getting comfortable with a new system is install the applications that I use the most for day to day life. That's a calendar, notes, email, Twitter and some kind of text editor. A few years ago in the Linux world, I recorded a lot of audio tutorials to assist users with these tasks, but I'm delighted to say that I don't have to this time. I'm coming to the Mac game later than others, so a brilliant website is doing a much better job than I ever could. It's AppleVis. Go over there and listen to some of their podcasts. I couldn't recommend them highly enough.

    Now that I have my mail, calendar, notes, text editor and Twitter applications set up, I am much more inclined to use the Mac over my PC. That's not to say I think the Mac is better than the PC, I'm not sure about that yet, but it means that I force myself to use the Mac to give it a fair chance.

    For mail, notes, calendar and text editing, I'm using applications that are shipped with OS X. Thanks to OS X 10.8 Mountain Lion, integration with iCloud is stronger than ever, so notes, reminders and even files are shared across devices almost instantly.

    For Twitter, I'm using YoruFukurou. Is that the right spelling? Ah, who knows! I'm too lazy to go look. What kind of a name is that anyway? Don't get me wrong though, it's a brilliant application. Probably one of the best Twitter clients I've ever used. The only thing I would say, and this is true across all applications on the Mac, is that consistency of keyboard navigation could do with some attention from the VoiceOver developers. Sometimes quick nav is perfect, sometimes it's absolutely terrible and actually causes applications to behave very erratically. In fairness to YoruFukurou, the reason this is such a brilliant Twitter client is that it supports dozens of keyboard shortcuts, making it very easy to reach almost every Twitter related task.

    One application I didn't mention is for messaging. It's called Adium. Overall, this application is very good, but comparing it to the usability and efficiency of Windows Live Messenger, I have to say that it's lacking a lot. For example, in Windows and Linux, when I get a new message I expect the screen reader to announce it automatically. On the Mac, everything is very manual. That would be fine, but without some kind of feedback, messages can and will be missed.

    I will definitely blog in more detail about my experience on the Mac, but I don't want to make the posts too long. Come back again tomorrow. Hopefully I'll have had time to write some more thoughts down.


  • Music or Technology.

    Work to live. Don’t live to work and equally, play to live. Don’t live to play. This is my new aspiration. I’m lucky. I love my job but lately I’ve been spending far too much time working and not enough time playing. Finding a better balance is something I need to prioritise sooner rather than later.

    I don't mind saying that for a long time now I have been giving serious consideration to moving away from my career in computing into the life of a full time musician. It is a very attractive option, but it would be a huge change with a lot of drawbacks. My father once said that, in his opinion, being a musician carried limited opportunities for advancement. Once you reached a certain level, there was no possibility for improvement. I'm sure he wasn't just talking about musicianship, technical ability and skill. He was quite rightly pointing out that, especially in Irish traditional music, there is a certain limit to the height of the proverbial ladder that someone can climb. Once you reach that peak, there is nowhere else to go. In the information technology industry, the ladder is much higher, leading to many more possibilities for improvement, promotion and, let's face it, increased remuneration. There is also a lot of competition in Irish music. That's a great thing. Don't get me wrong. It means that the quality of the music is constantly improving at a rate that can be described as nothing less than astounding. Just listen to the children being taught at the moment. They're incredible! It would mean that I would need to ensure that I actually practise once in a while though.

    Working with computers every day is posing its own set of problems. I am continually hampered by the fact that the assistive technology that I depend on so much is in a constant state of catch-up with the rest of the world. Almost every new application that is released by Microsoft, Symantec, McAfee, Trend Micro, HP, Dell, VMware and IBM causes yet another problem for me as a screen reader user. It has reached the stage where I need to regularly enlist the assistance of users of iPhones and iPads who can take the time to talk to me over FaceTime so they can see the errors on the screen when my assistive technology cannot read them. I ask you this openly. How can we expect employers to see us as having the same potential as people who have sight when a new application is released and a screen reader user can't access over 50% of the interface? When you administer dozens of systems, how can your employer be expected to look the other way while you struggle to use simple parts of applications because the screen reader can't read what's on the screen?

    I am frequently in the position where it probably looks like I'm just being lazy or wasting time, but in actual fact I'm prolonging a particular job because during my spare time I'm trying to write a script for my screen reader to get around some strange application. Or, worse, I'm waiting until I can get someone to quickly let me use their eyes for two minutes. See, I'm stubborn. I hate admitting that I can't access systems. I'd rather be seen as incompetent, lazy and slow before letting people see that I'm struggling with accessibility. It's probably silly, and without doubt a lot of you think I'm crazy. It's probably also true that a lot of you are wondering why I'm writing so bluntly, with a basic admission that I'm finding it almost impossible to do my job. Simple. I find it almost impossible, but I'm absolutely and completely committed to doing the very best I can, and until that stops, I know I will succeed.

    That's not me being overconfident or having a big head, I'm simply saying that I can't afford to give up. I spoke to someone on Twitter two weeks ago and, although he doesn't know me, he was able to see my frustration within ten minutes. I don't know who this person is really. I don't know how much experience he has or even where he's from. He did, however, say something that hit home to me. He told me to be careful. "Constantly fighting accessibility battles can very easily burn you out". How true this is! I would bet that all my stress is caused by trying desperately to access systems. I'm quite good under pressure. Outages, major changes, upgrades and problems don't bother me too much. They all have a logical solution and, unlike the day to day work, such problems don't come up every day. Accessibility, or the lack thereof, is just driving me crazy. I sometimes fear that I may have hit a glass ceiling in this type of work. I can see… excuse the pun… what's above me and I know how to get there, but I can't get past the step that I'm on now. It's not that I couldn't go in and configure a Dell KVM. It's not difficult. But the interface is QT based, so I can't access it. See what I mean? That brings me to a very quick point that I wanted to make: this is not the fault of the assistive technology developers. If leading companies such as Microsoft do not use good design practices in all of their applications, how can we hope for any other company to? Making an accessible application is not difficult. It just takes consideration. That's a topic for another day though.

    So, you can see my dilemma. Play music full time and earn less money with fewer prospects for promotion or enhancement, or feel like I'm banging my head off a wall every day trying to make a square peg fit into a round hole. I've often said that I'm lucky. I love my job. I love working with computers, but the more I achieve and the higher I climb on that ladder I keep talking about, the more I seem to hit this inaccessible wall.

    Another very important point to consider is that music is my escape. I heard it described yesterday as like jumping off a pier into a deep river. You start off in one world, but when you hit the water you are in a completely different world with different rules, different movements, different priorities and different goals. You hold your breath under water. You breathe while swimming. You use different muscles. Music is like this. While playing music, the same rules don't apply as when working. It's a very focusing activity. I would be very afraid I'd lose that escape if I played music full time. Where would I go to relax then? Back to a computer? Could the worlds work in reverse?

    My mother commented before that, until I suddenly announced one day that I had made up my mind to go to college and study computing, she had always felt it was a foregone conclusion that I would be a full time musician. That's interesting, isn't it? Here's a little known fact. Of my two parents, I would consider my father to be the more musical one. So, that perspective surprised me a lot. In a way, I considered myself very lucky. By fourteen or fifteen, I knew exactly what I wanted to do. I even knew the course number. DK054.

    It’s an interesting question and an interesting choice. I don’t expect I’ll answer it any time soon. I’d still like your perspective though.


  • The accessibility of virtual desktops.

    This could probably be a much more scientific review or analysis of the accessibility of a Windows guest running on the ESXi hypervisor, however I don't really have the time to write such a document at the moment. Instead, this will serve as verification to some that access to this environment is possible, albeit in a limited way.

    For the less technical people out there, basically what I’m talking about here is running a Windows computer inside a virtual machine.

    You need a more basic description? No problem. Try this. Let's say you have one large computer. Virtual machines are machines that run inside this big computer. Think about it as if it was a building. This building might host ten different companies. True, each company could probably have its own building, but there's no need. Each one only needs a certain amount of space. An entire building would be overkill. So, the one building hosts all of these guest companies, just like one large server can host dozens or hundreds of virtual machines, be those workstations that users work with or servers that run the company's IT systems. Having one building host all these smaller companies cuts down on the space required, the cost of maintenance and the cost of power. When you hear the word hypervisor, I am basically talking about the building, the large server that hosts all the virtual machines or companies. When I talk about a guest, I am talking about the companies in the building, i.e. the virtual machines. Get it?

    • Building = Server / Hypervisor
    • Company = Guest or virtual machine

    Ok. I'm glad we have all of that cleared up. You can take a break for a few seconds before I move on to the next part, because it's going to get a little technical again. Don't worry. You'll understand it now that you have a grip on the basics.

    For one reason or another, I spent some time yesterday tackling the problem of how a blind person can independently and efficiently access a Windows 7 PC that has been virtualized, using a thin client. A thin client, for those of you who aren't aware of the term, is a basic PC. It has very limited storage, limited RAM and a low power processor. The idea of this machine is to give a user a platform from which they can access a virtual computer. All it does is start a cut down version of Windows and provide the user with a login box to start their virtual system.

    There is one barrier to accessibility when using thin clients. No additional software can ordinarily be installed, as there isn't enough space to facilitate it. This means installing a screen reader isn't an option. Even a pen drive version of Jaws won't work, because it requires the installation of a mirror driver. Fortunately, NVDA will work very well. Just download the portable version and run it. If I were to make one suggestion, it would be to put NVDA to sleep automatically when the PC over IP or RDP client starts, as things can get a little confusing when modifier keys such as caps lock are pressed. I know this can be done using scripts, and it is something I would look at doing if I was using this as my workstation every day.
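    To sketch what that script might look like: NVDA supports small Python "app modules" dropped into the portable copy's appModules folder, named after the executable they apply to. Setting the module's sleepMode attribute silences NVDA, and stops it intercepting keys, whenever that application has focus. I'm assuming the Microsoft RDP client here (mstsc.exe); a PC over IP client would need a module named after its own executable, and the stub in the except branch is only there so the file can be read outside NVDA.

    ```python
    # appModules/mstsc.py -- put NVDA to sleep while the RDP client has focus.
    # The file name must match the target executable (mstsc.exe for Microsoft's
    # RDP client; a PC over IP client would need its own module name).
    try:
        import appModuleHandler  # provided by NVDA itself at runtime
    except ImportError:
        # Minimal stand-in so this sketch can be loaded outside NVDA.
        class appModuleHandler:
            class AppModule:
                pass

    class AppModule(appModuleHandler.AppModule):
        # With sleepMode set, NVDA stays silent and passes modifier keys such
        # as caps lock through to the remote session instead of grabbing them.
        sleepMode = True
    ```

    With something like this in place, the local copy of NVDA on the thin client would go quiet as soon as the remote desktop window takes focus, leaving the copy running inside the virtual machine to do the talking.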

    So, you can now use the thin client to log into your workstation. That’s the first hurdle out of the way. Now what?

    With VMware, you can log onto virtual machines using two protocols: RDP, which is Microsoft's remote desktop protocol, or PC over IP, which is the protocol used by VMware. PC over IP is more efficient for a number of reasons, although in later versions of RDP Microsoft have gained some ground. I won't explain the benefits of PC over IP in this post, but very quickly, PC over IP is less bandwidth intensive, so the experience of remotely using a virtual machine is a little smoother.

    You'll be happy to know that relaying sound back to the thin client is supported by both of these protocols, however you won't get instant feedback like you would if sitting at your own PC. The delay is in the realm of a fraction of a second, but if, like me, you expect instant responses from a screen reader, this fraction of a second may as well be an eternity.

    Relaying sound back to the thin client is very important. Jaws, my preferred screen reader, crashes every time it is started in a virtual machine using the PC over IP protocol. Without fail, it refuses to run. NVDA, on the other hand, runs very nicely in a virtual machine using the PC over IP protocol. Of course, when using NVDA, mapping sound back to your thin client is vital, which is why I made the point earlier.

    So, there you have it. What I'm saying in a very long winded way is: yes, you can access a virtual machine using a thin client if you're stuck, but I wouldn't think it's usable every day. The sound lag is just too pronounced. NVDA's ability to work in this environment should however be recognised and commended. Jaws, a leader in screen reading software, seems to fail badly.

    Please don’t take this as an endorsement or a criticism of any screen reader. I am simply stating what I have found to be the reality here. I have written this post to highlight this area and to show that improvement is required. More and more organizations and companies are moving to virtual desktops to replace physical machines as they provide significant cost savings. I have a genuine fear that assistive technology companies are not aware of this trend and that blind computer users such as me will be left scrambling to keep up with our sighted colleagues. I strongly believe it is vital that companies such as Freedom Scientific, NV Access and GW Micro listen to users and, when possible, utilize their experience and expertise. I for one offer it freely.

    Systems used are:

    • ESXi 5.0
    • VMware View 5.0
    • Windows 7, 64 bit and 32 bit.
    • Thin client running a cut down version of Windows XP.
    • 1Gb network connection.
    • Virtual machine with two processors and 4GB of RAM.
    • Thin client with 1GB of RAM and one processor at 1.5GHz.

    I should finally note that I do not see RDP as a viable solution for accessing virtual machines using a thin client, especially for screen reader users. If by some stroke of luck you got Jaws running on your thin client, you would then use Jaws on your virtual machine to tunnel the data back to your locally running instance of Jaws on the thin client. That’s fine; however, what if, like me, you’re a system administrator and you need to establish connections to other remote systems from your virtual machine? You will not be able to use Jaws over a second or third connection, as you are already using Jaws through one RDP session. Drawing on an article from IBM, this seems to be a viable solution for some researchers; however, from the perspective of someone who both administers and uses a virtual environment every day, I would not be able to depend on RDP due to these limitations. PC over IP is a protocol designed and optimized for the VMware virtual platform. We should be able to use it.


  • The future of browsing the web.

    When you’re a blind user of technology you are going to depend on a screen reader, and it’s very likely you ordinarily read the web the way I do: from top to bottom and then from left to right. This is just how traditional screen readers on Windows, Linux and the Mac do things. Now, let me explain this to the sighted readers of this blog. Take the website TheRegister.co.uk. This site has content arranged in columns and it’s very easy to glance through the headlines that are of interest. Almost at a glance you can pick an article and click through to that page. Traditionally, a computer user who is blind utilizing a screen reader will need to navigate past the navigation links at the top, down by the search link and past the advertisements until she or he gets to the content. When he or she finds a page and navigates to it, the entire journey starts again. I’m dramatizing it slightly to make a point: web browsing for an individual using a screen reader is very linear. Over the past six or seven years the situation has improved steadily, with screen reader makers developing shortcuts that allow navigation by heading, table, list, frame, paragraph, image, form element and other standard HTML elements. This revolutionized access to the web, as sites that use decent HTML mark-up can be navigated easily by jumping past huge chunks of text.

    I think, or rather, I hope a new revolution in web accessibility has arrived. It’s in the form of a device I originally publicly discredited as being nothing more than an oversized iPod. Yes, I’m talking about the iPad. I think this big touch screen is actually the most enjoyable interface I have ever used for browsing the web. It’s so nice to be able to explore the layout of a website. Getting a sense of where the navigation links are, where the content starts and where the form fields are located, for example, is so much nicer than remembering that to find the content on my favourite website, I press h three times to jump to the third heading and then press down five times to move past all the junk. Just as I assume a sighted person reads through the timeline on Facebook very quickly by glancing at specific parts of the screen, I can glance at different parts of the screen with my fingers. I know, it’s very different still, but it is probably the closest I have ever been to actually reading a site in a similar way to sighted friends.

    There are also far fewer keyboard commands to remember. For obvious reasons of course.

    I recently designed the website for Computer Support Services from the ground up. Compared to the work of professional web designers, my attempt at design is basic at best, but I’m quite proud of it. I regularly checked my layout using the iPad. Making sure I aligned things correctly was so much easier using a touch screen interface. I’d make a change to the style sheet and, as soon as it was saved, I’d have a feel of the iPad to make sure I hadn’t broken something; then, when I was happy that everything was still in the right place, I’d look for the new component that I’d added. For example, take the Twitter feed at the bottom right of the Computer Support Services website: I wanted to give that just enough space to let it stand out, but I didn’t want to overwhelm the bottom of the home page. Finding that balance was made a lot easier by exploring the size of the section by touch.

    If you haven’t tried browsing the Internet using an iPad, I’d encourage you to give it a shot. If you tried it before and you aren’t convinced, spend some time with it. If you want specific tips drop me a comment.

    I should also mention that I’ve written this blog post using WordPress on my iPhone and finished it using the iPad. The wonders of modern technology, ay? 🙂


  • Using the Tilda terminal in Linux with full accessibility for Orca users.

    This post was originally written on Friday the 29th of February 2008; however, over the past few years it got lost due to blog upgrades. Because I’ve noticed a few people looking for this information, I thought it would be a good idea to post it again.

    Yesterday, I decided to play around with a package called Tilda.  Tilda is a graphical console for the Gnome desktop.  It runs on KDE as well, but it’s GTK based.  The main advantage it gives is more bells and whistles for people who like visual effects.  No, I’m not into visual effects, for obvious reasons, however I was curious and I like the speed that it launches at.


    After installing it yesterday, I was very happy to see that Orca worked with it right away.  When I ran Tilda for the first time, I was given a configuration wizard screen.  Orca spoke all of the focusable objects as if they were made for each other.  In the terminal itself, flat review could be used to read the console as you would expect with any accessible application.  The only problem was that Orca didn’t automatically speak new text as it was written to the screen.


    To try to rectify the situation, armed only with my Windows screen reader knowledge and my curiosity, I renamed the gnome-terminal.py file to tilda.py.  That didn’t do anything for me.  However, thinking back, I wonder if it didn’t do anything because I didn’t restart Orca before trying Tilda again.  My thinking behind this attempt was that Windows screen readers such as Jaws versions before 7 and Window-Eyes used a script or macro type function that was more or less tied to the executable of the application.  For example, if notepad.exe was run, Jaws / Window-Eyes would run the settings / scripts for that application if it found a file named notepad.jsb or notepad.001.  This has changed in later versions of Jaws and Window-Eyes, however I assumed that the logic might be similar in Orca.  That didn’t work though, so I sent a brief email to the Orca discussion list asking for their suggestions.


    Rich Burridge, an Orca developer, took some time out of his busy day to help me.  With some research, he determined that Tilda actually uses VTE (the Virtual Terminal Emulator widget).  This is also used by Gnome-Terminal and has a lot of accessibility support already.  This meant that it was probably fine to use the Gnome-Terminal script, as it would most likely behave the same.  Only one small change was required.  He suggested that I add a few short lines to my orca-customizations.py file.  Look at the end of this post for the specific code.
    I want to take this opportunity to describe to you how Gnome accessibility differs from that provided by Windows screen readers, as in Windows, just copying a script from one application to another and expecting it to behave the same would be completely unheard of.  Windows screen readers provide accessibility in Windows.  In Linux, it’s Gnome that provides its own accessibility.  Orca takes advantage of this and provides output customized to ensure that users receive the information they need in a way they can understand.  That’s the short version.  Now for some description.


    In Windows, if you are using a screen reader like Jaws and an instant messaging program like MSN, for example, Jaws needs to monitor very high level behavior.  That is, it needs to track changes to the interface, read text from the status bar, monitor the entire conversation history area and a lot more.  It does this to ensure you hear status updates, incoming messages and contact information and, of course, at times it needs to keep track of your own actions so it can tell you where you are in any given window.  Most of this information is obtained by analyzing the interface.  Only a very small percentage of what Jaws gets from Windows is obtained from information that the application or operating system gives it.  In other words, MSN does not communicate with Jaws to tell it that a new message has arrived.  Jaws determines this by watching for changes on the screen.


    Gnome, on the other hand, is completely different.  It provides assistive software such as the Orca screen reader with information so that it can relay this to the user.  In the Gnome messaging client, Pidgin, Orca is informed when a new message is sent to the message history window.  It then has events, determined by scripts, that tell it what to do with this information.  So, it doesn’t matter how you have Pidgin configured, it will still send this information to Orca, which in turn will relay it to the user.  So, bringing it back to the terminal, it doesn’t matter that Gnome-Terminal is completely different to Tilda.  Tilda uses different colors, different positioning and a lot of eye candy.  It really doesn’t matter though, as it utilizes VTE, which provides the required accessibility information to Orca!
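
    To make that contrast concrete, here is a toy sketch in Python. This is not real Gnome or AT-SPI code; every name in it is invented for illustration. It simply shows the event-push model: the toolkit notifies every registered assistive tool of a change, so the listener never has to watch the screen for it.

```python
# Toy sketch (invented names, not real AT-SPI code) of the event-push
# model described above: the toolkit pushes changes to its listeners.

class Toolkit:
    """Stands in for the Gnome accessibility layer."""

    def __init__(self):
        self.listeners = []

    def register(self, callback):
        # An assistive tool (think Orca) registers interest in events.
        self.listeners.append(callback)

    def insert_text(self, text):
        # The application simply writes text; the toolkit notifies every
        # registered listener, so nobody has to poll the screen.
        for callback in self.listeners:
            callback(text)


spoken = []
toolkit = Toolkit()
toolkit.register(spoken.append)       # an Orca-style listener
toolkit.insert_text("new message arrived")
print(spoken)                         # ['new message arrived']
```

    Contrast this with the Windows model described above, where the screen reader itself would have to compare snapshots of the interface to discover that the text had changed.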


    I should also say here that although my description of the differences between how Windows and Gnome behave should be accurate, I can’t say it with full certainty.  I’m not a developer, and if you are really interested in the low level workings of Gnome and how it provides accessibility, I’d suggest subscribing to the Orca mailing list.


    That’s all the background and descriptions out of the way.  If you’re interested in getting up and running with Tilda and Orca, use the following instructions:




    1. Go into a terminal.

      1. Press alt and F2 when in the Gnome desktop.
      2. Type gnome-terminal
      3. Press enter.

    2. Install the tilda terminal.

      1. Type apt-get install tilda (prefix the command with sudo if you are not logged in as root)
      2. Press the enter key.  When prompted to confirm the package download and installation, type the letter y and again, press enter.
      3. Exit the terminal window.

    3. Instruct orca to run the gnome-terminal.py script when you run tilda.

      1. Press alt f2 to start the run dialogue box.
      2. Type gedit then press enter.
      3. Paste the below code into the editor.

        import re
        import orca.settings
        from orca.orca_i18n import _
        orca.settings.setScriptMapping(re.compile(_('tilda')), "gnome-terminal")

      4. Save the document by pressing control and s.
      5. Exit gedit by pressing alt and f4.

    4. Run the tilda terminal.

      1. Press alt f2 to start the run dialogue box.
      2. Type tilda and press enter.


    You’re done.  You are now in the Tilda configuration screen.  Configure the package to your own preferences, then use the OK button to save your changes and start the Tilda terminal.  This wizard will not be shown automatically again when you run Tilda.  To bring up the wizard, type tilda -C in the launch application dialogue box accessible with Alt F2.


    I think that should be clear enough.  If you have any problems or questions, feel free to leave a comment and I’ll try to get to them.


    My thanks to Rich Burridge who so generously helped with this.  Without his help I’d probably be working at this still.


  • Planning and updating.

    I’m still mulling over a few ideas that will completely change the way this blog is used, and many of these ideas will likely result in the end of the blog as you know it. I’ve been held up, though, due to accessibility-related challenges. I just can’t seem to find the right kind of software.

    There are quite a few things happening behind the scenes as well. Around this time last year, I changed from a VPS to hosting my own server. This year, I’ve purchased an even more powerful machine, and the plan is to extensively update all of the software that is used for these sites: Microsoft Exchange, the VOIP server, the backup server and the file server. With any luck, and probably a lot of money, I’ll be able to expand the technology I use to really take advantage of the high availability, clustering and fault tolerance that is available in many of these systems. This will mean that I should be able to sleep at night without worrying about a single patch bringing down the entire system!

    I’m thinking of changing from internal SAS based storage to network attached storage (NAS) to reduce costs but increase the overall capacity of the file server. At the moment I’m running very low on space because every file, Email, website or voice mail that is written to a disk is also written to a backup server. This means when I buy one 300GB SAS disk, I need to buy a second just for backups. Even with compression, backups take up a considerable amount of space on my network.
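
    The arithmetic behind that doubling is simple enough to sketch. The compression ratio below is an invented example figure, not a measurement from my backup server:

```python
# Rough storage arithmetic for a mirrored-backup setup: every byte of
# live data is also written to the backup server, possibly compressed.
def raw_capacity_needed(usable_gb, compression_ratio=1.0):
    backup_gb = usable_gb / compression_ratio
    return usable_gb + backup_gb

# One 300GB SAS disk of live data needs another 300GB just for backups:
print(raw_capacity_needed(300))       # 600.0
# Even a hypothetical 1.5:1 compression still leaves a big footprint:
print(raw_capacity_needed(300, 1.5))  # 500.0
```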

    I’ve been connecting my VOIP PBX to Blueface, a VOIP phone provider in Ireland. They are incredibly reliable and their prices are very reasonable. I couldn’t be happier with their service, but one of the reasons that I do all of this stuff is to be able to learn in an environment that isn’t pressured. It might be time to look at alternatives, just to have the experience of connecting to different services. To that end, I’m looking into connecting a Skype account to the VOIP server. This may or may not have any benefits. I think it will be cheaper to buy other international numbers and it might allow for connectivity with Skype computer-to-computer calls, but at the moment the idea is in its very early stages.

    The other thing I need to think of is ongoing costs, cooling and noise. I’d love to run two servers in parallel but this is a costly hobby. The price of electricity is not something I need to be too concerned with at the moment, but if I add another server into the mix it will increase by about forty or fifty Euro a month. That’s not something that I can really justify. I’m thinking of a few alternatives to get around this while still having reasonably high availability.

    The first possible solution is to get one server fully set up, buy a NAS box with about 10TB of storage and set a backup job to copy a snapshot of each virtual machine to it. The file server will also be based on this NAS, so if the main server goes down it should be possible to bring another server into the mix very quickly. The other server will be set up with the same virtual host software. It’s most likely going to be Hyper-V. I’ve based the virtualization on ESX over the past year or two, but I’d like to get more exposure to Hyper-V, so it’s worth a shot for a while. I’ll segregate this server off onto a private network with only one connection for restores from the NAS. Every week or two, I’ll power on this server and restore the virtual machine snapshots onto it. With a bit of testing I’ll be able to ensure that the restores have worked, and because they’ll be on a private network they won’t have any impact on the live network. The result is that if the main server goes down, I’ll be able to bring up a second server instantly, or, if it’s crucial that it has the most up to date data, it will be up after a few minutes once the snapshot has been restored. Assuming Wake-on-LAN works on these network interfaces, I should even be able to start this second server remotely and restore the snapshots easily.

    It would of course be much nicer if I could cluster both Hyper-V boxes with Microsoft’s equivalent of VMotion, so that if the first server went down the system would automatically fail over to the secondary server. That’s probably not going to be possible though.

    The second consideration is heat. Servers generate heat and in turn use more energy trying to cool down. Of course, in a perfect server environment an array of air conditioning, dehumidifiers and fresh air vents would be used to keep conditions at a level where servers run effectively, but that’s just not an option in my kind of environment. For God’s sake, I’m running them in my house! At the moment, I have a specific location where all the CAT5 cable is patched back to. This works quite well with a single server, but there are still occasional problems with heat and air flow. I have a plan that will greatly improve the situation, but it has taken a long time for it to happen. Again, it’s all money. Basically, I’ll be moving the servers out to a shed that’s attached to the house. This is easy to reach via cable and, with some work, should be reasonably easy to keep at a consistent temperature with reasonably good air flow and humidity. The worry is that it will get too cold during the winter, so some insulation is required before I proceed.

    Moving the servers out there will also help with the noise issue. After years of listening to computers all day I have almost filtered out the noise; however, I’m aware that it’s not a comfortable situation for some people when a server is quietly humming away in a house.

    So, there you have it. For all of you who think I’m insane, you’re absolutely right; however, even insane people often have perfectly logical reasons for their actions. For me, working on this kind of thing at home allows me to take full control of the set up, configuration and support of all of these systems. This gives me a great understanding of how it all works. With any luck, when I go for promotions in my current job or, in years to come, when I look for a completely new job, I’m hoping it will stand to me. I’m also incredibly lucky but also very unfortunate in the environment I work with every day. It’s very diverse and complicated. Because so many people depend on it, there are tools for managing and monitoring everything. This means that if functionality is needed, a hugely complex enterprise tool can be found and implemented. This slightly spoils me. It means that I don’t really have to think of ways of stitching things together or making work-arounds so that systems can communicate with each other. If I knew I was always going to work in this kind of environment where anything is possible, then I’d be perfectly happy with this. However, things might change. I might eventually work in a much smaller company where tools like SCCM, SCOM, Netbotz, WhatsUp Gold and even Backup Exec or Data Protector simply cannot be afforded, so scripts and free applications need to do the same job. I think it’s important to show that I’m just as comfortable with small environments as I am with enterprise level systems.

    The other side of it is that by working independently on different systems I get to find accessibility problems in my own time. More importantly, I get to solve these accessibility problems in an environment that isn’t pressured. I can then bring these solutions with me into work and apply them when they’re needed. It’s very important to me that I do not let an accessibility related problem get in the way of me doing my job independently and efficiently.