• Category Archives: Accessibility
  • Windows 10 S – A revolution for Accessibility

Microsoft released the Surface Laptop last week. As someone who absolutely loves the Surface Book, I've been following the developments in the Surface line with interest. I'm not hugely blown away by the Surface Pro line but that's a reflection of the state of touch screen access using screen readers more than of the device itself. Physically, I think the Surface Pro is very nice to hold, powerful enough to run all standard productivity and development tools and durable enough to be used for both business and pleasure every day. The Surface Book however is the perfect computer. When relaxing on the bus on the way to and from work I can easily consume content, but with this machine, an i7 with 16GB RAM and a 512GB solid state drive, I can just as easily run up a few virtual machines, Visual Studio 2017 and a suite of debug and analysis tools and it hardly breaks a sweat. It's perfectly comfortable to type on for 12 hours a day and the battery life is just brilliant. I sound like an advertisement for the Surface Book, which is fine. It's easily the nicest laptop I've ever owned.

     

The Surface Laptop doesn't quite tick all the boxes for me but that's a good thing at the moment. It is expensive, maybe too expensive for most people, but it's what it represents that is important. The Surface line is aspirational. It's expensive but it's a product line that shows off the power of Windows. It's Microsoft's way of showing the world what can be done with devices that run Windows and as a result, PC manufacturers are following their lead. This means that although the Surface Laptop is at the higher end of the price scale, the introduction of Windows 10 S in parallel means that Microsoft partners are again following Microsoft's example by releasing their own devices built on Windows 10 S. This will mean lower prices for lower spec machines that, although they do less, still do more than a device like the iPad or an Android tablet.

     

What has all this got to do with accessibility for Blind people? The answer is unfortunately a bit long, but please stick with me for a minute so I can explain, because the result in a year or two could be huge if the current pace of change is maintained.

     

I love the Jaws screen reader for what I do every day. But for many people, all they need to use is a browser and Microsoft Office. With the recent developments in Narrator, the built-in screen reader for Microsoft Windows, I'm not sure if Jaws will be as compelling in the long term as it is right now for the average user. Not that I'm saying I could personally use Narrator every day; I think it's still years behind Jaws. But look at Voiceover, the built-in screen reader for Apple's OS X and iOS operating systems. It's also years behind Jaws and it has quite a few bugs, yet it's probably the most popular screen reader in the world at the moment. It is highly likely that it has taken over from Jaws in terms of overall screen reader market share, as more blind users have access to mobile devices than Windows PCs I'm sure. Those same users might be happy paying $189 to $1200 for various specs of low powered laptops.

     

For those of you who remember or paid any attention to Windows RT, this really isn't that. From an accessibility perspective, Windows RT was completely unusable. But with the Surface Pro, Surface Book, the Surface Studio and now the Surface Laptop, a blind user can turn it on, hit two buttons and get access to the core of the OS without a commercial screen reader. I bet Freedom Scientific are very worried about this – and if they aren't, they certainly should be.

     

I'm talking to Microsoft in Ireland and the US every week at the moment about offers for education, as that's the area I'm now working in. I'm consistently delighted when they raise the topic of accessibility without being prompted. There's a fella heading up the applications for children, including Minecraft, who is great at working on accessibility problems in many difficult areas.

     

    I think it’s a case of watch this space.

     

I'm also putting my money where my mouth is. There's an application called WhatsUp Gold that isn't working with Jaws at all at the moment. I've switched to Narrator and Edge when using it, as that's how I get the best results. This should come as a huge shock to anyone involved in the development of Jaws. It certainly shocks me. There are controls that Narrator is reading perfectly, such as grid views, tree views and toolbars, that Jaws isn't even seeing in Chrome, Firefox or IE.

     

    I need Narrator to be more responsive and I’ve left feedback with Microsoft in relation to this so here’s hoping that it gets better. I can see myself using it more as time goes on unless Jaws gets a lot better for touch screen access.

     

I travel a lot on buses so using the laptop isn't always very comfortable. For that reason, I use a touch screen device such as my phone. I'd really like to be able to use my Surface Book more for consuming content on the go. If Narrator gives me this freedom first, then there will be no contest.

     

This is coming from someone who has used Jaws as his primary screen reader for twenty years, so I have a certain level of brand loyalty. The point I'm making is that even with that loyalty, if Microsoft can take the lead, I'll switch. That should drive some serious innovation and changes in Jaws version 19, because if someone like me will change over, someone who just uses a computer for browsing and email will change much sooner.


  • Using PuTTY with Jaws 18.

Please be aware that I don't recommend that you use PuTTY exclusively for SSH access, especially in Windows 10. There are a number of better alternatives out there at this point for most day to day use; I'll add links to one or two below. However, there are times when PuTTY is just the best tool for the job, so it's important that you can get some feedback from Jaws.

Note as of 26th October 2017:

    Please note that this post is now out of date. You should use these fantastic PuTTY scripts instead as they provide much more complete functionality.

I had posted a script before that worked with previous versions of Jaws. In fact, it would probably work with Jaws 18 as well, but the SayNonHighlightedText function in Jaws 18 has been updated, so it's only right that I tweak it slightly and publish it here to be used in a PuTTY.jss file.

    Here’s the code:

Include "HjGlobal.jsh" ; default HJ global variables
Include "hjconst.jsh" ; default HJ constants
Include "HjHelp.jsh" ; Help Topic Constants
Include "common.jsm" ; message file
include "MSAAConst.jsh"
include "UIA.jsh"

    const
    NavigationByLineTickThreshold = 200
    globals
    int LastLineNavigationTick

    Void Function SayNonHighlightedText (handle hwnd, string buffer)
    ; NonHighlightedText Function for speaking all newly written nonhighlighted
    ; text.
    If GetScreenEcho () > ECHO_NONE
    && hWnd == GetFocus()
If GetWindowClass(GetFocus()) == "PuTTY"
    && GetTickCount()-LastLineNavigationTick > NavigationByLineTickThreshold
    ;New text should be spoken only if it is not a result of navigation by line.
    ;This prevents double speaking when navigating through a command history,
    ;since the SayLineUnit will already have spoken the new text.
    Say(buffer, OT_NONHIGHLIGHTED_SCREEN_TEXT)
    ;Now clear LastLineNavigationTick, just in case more new text appears shortly after the navigation.
    LastLineNavigationTick = 0
    Return
    endIf
    endIf
    if (GetScreenEcho() > 1) then
    Say (buffer, OT_NONHIGHLIGHTED_SCREEN_TEXT)
    endIf
    EndFunction

There are a number of great alternatives to PuTTY.
Over on GitHub, Microsoft have a rather nice SSH PowerShell module that provides a method of accessing an OpenSSH server on Linux from within PowerShell.
The best way to use SSH on Windows in my opinion is to install Git. Be sure that you choose to make Git features available from the command line so that you can use SSH without starting the Git Bash shell first.
Lastly, another really good option if you are using Windows 10 is to install Bash on Windows. This is an add-on that you can enable from within Programs and Features\Windows Features.
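As a quick sketch of those last two options in action: once Git's command line tools or Bash on Windows are installed, connecting to a server is the same as it would be on Linux. The user and host names below are placeholders for your own details.

# the user and host names here are only examples
ssh pi@server.example.com
# use -p if your server listens on a non-standard port
ssh -p 2222 pi@server.example.com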

    There are now more ways than ever to access your Linux servers over SSH from within Windows. Have fun!


  • The Apple Watch with Voiceover review – Day 7

Someone made a comment a few days ago about the Apple Watch, and specifically Voiceover, that I found kind of interesting. She said that the Apple Watch isn't like a normal talking watch. A normal talking watch has very slow speech feedback and the volume is static. It also usually chimes before announcing the time to the world. Her point was that the Apple Watch speaks much faster because I have it configured at that speed, and if I'm going into a quieter environment I can set the volume of the speech appropriately, so that in a quiet meeting, for example, it doesn't shout my notifications out to the world. An Apple Watch might not be as discreet, but it has a coolness factor at the moment that slightly negates the annoyance factor for people around me. I can only hope that lasts.

In most reviews of the Apple Watch I've read that people get annoyed by the number of notifications. I have to say that I'm not annoyed by them at all. In fact, I miss most of the notifications that come in to the Apple Watch, because the tap is so slight that if I'm busy I simply won't notice it. The iPhone demands attention but the Apple Watch quietly asks for it.

I'm a techy. I love all things techy, therefore it's a given that I'll get to like the Apple Watch, but I don't love it. I don't see myself feeling naked without the Apple Watch like I do when I forget my phone. Sorry, that's not quite true. I don't feel naked without my phone, but I do feel like I'm missing something important. The Apple Watch isn't that important to me. Apple announced on Monday that watchOS version 2 will be out in September or October. I'm really hoping they address the shortcomings I've outlined on this blog in the past week. I'll be emailing accessibility@apple.com to make sure they are aware of my problems, complaints and annoyances. I can only hope that every other Voiceover user of the Apple Watch does the same thing. If people don't tell Apple what they are doing wrong, they really can't expect them to fix the problems for the next release.


  • Apple Watch with Voiceover review – Day 2

Day two with the Apple Watch was quite uneventful.

    I was working from home so I reached my standing goal and my activity goal but I didn’t get anywhere near reaching my exercise goal. I’m hoping today will be a little better.

    Because I was at home I also didn’t have any problem with being unable to hear the watch due to background noise.

I spent some time before work learning more about it. I still haven't figured out how to turn off the sounds for Voiceover, but I learned that I can increase and decrease the volume reasonably easily: double tap the screen with two fingers, then slide up or down. The problem I've encountered, however, is that when you release your fingers from the screen the volume can jump up or down a bit. It's not very accurate. It's also not all that efficient, so it can't be done in a hurry.

    I also noticed that in glances you can move through the items by using the scroll area at the bottom. This is much faster than flicking up and down and then double tapping on next or previous item.

I've enabled digital crown navigation. This can be done by triple tapping with two fingers. I like this method of navigation, especially for notifications. The problem I have encountered, though, is that when you use it to quickly move down to the last control, labelled Dismiss, Voiceover doesn't always tell you that you're there. It feels like an unfinished feature.

    I looked through the manual yesterday to try to find a list of Voiceover gestures. I had no success. If they are in a manual, they are well hidden.

I'm still very irritated by the watch constantly turning on when I move my hand. Obviously I use my hands for everything: finding things, opening doors, typing, playing music, working with my guide dog and so on. The watch has absolutely no awareness of this though, and constantly turns on and off. Each time it turns on, Voiceover plays a sound and speaks the time. The thing is, I like this feature, but I'd prefer if it was more intelligent. The funny thing is, I've read other reviews of the Apple Watch complaining that the wrist movement detection isn't sensitive enough; in other words, when the reviewers moved their wrists the watch face didn't turn on. Maybe this is something Apple have rectified and, as a result, have made it over-sensitive.

I have liked getting the notifications on my wrist though, especially for work. I don't get overloaded, so it's nice to get the important things even when I've stepped away or I'm talking to someone.

Speaking of stepping away, one of the draws of the Apple Watch for me is the fitness and activity side of things. I know I need to be more active. This is showing me exactly how much more. It may not be as accurate as dedicated devices on the market, but it's accessible, and it's accurate enough to send me in the right direction.


  • The Apple Watch with Voiceover – Day 1

[Photo: the Apple Watch on my wrist]

I ordered the Apple Watch a few days after it was officially available in April and it arrived yesterday, a bit sooner than I had expected.

I had tried one in the Apple store in Belfast back in April, but the demonstration models didn't have the ability to enable Voiceover, so my conclusion wasn't definitive on whether this was going to be a benefit or not. However, as I like all things techy, I decided to go and buy one regardless.

    I want these reviews to be comprehensive without being too long so let me jump right into it.

Firstly, there is an awful lot of packaging. I don't know how Apple is ticking its sustainability box when it has so many little bits of packaging around the watch. It came in a cardboard box. Inside this was a cardboard shell which suspended another cardboard box. Inside that was a plastic box with the watch in the middle. The watch was also wrapped in about four types of plastic, from the outside of the box to the strap.

Fortunately it had plenty of battery when I started with it. It wasn't at 100% but it was probably around 90%. I turned it on, successfully paired it with my phone and within a few minutes the Apple Watch was talking and working well. It's just as well it was a quick process, as I got it into my hands at 7:35PM last night and I had to be out by 7:50PM.

    The fact I had to go straight out after the watch was configured meant that I didn’t really give myself enough time to get even slightly comfortable with this new user interface. I knew how to check the time, get to glances, open notifications and move around applications but I hadn’t yet customized the watch face or installed the update to 1.01.

On the up side, bringing the Apple Watch out straight away meant that it was thrown into a real life scenario right out of the box. I had to meet the rest of my family for a big event, so the room that we were in was very noisy. This posed a challenge for the Apple Watch from the perspective of a Voiceover user. How do you gain the benefit of the Apple Watch as a discreet extension of your iPhone when you either need to have the volume up so high that everyone in the room can hear it, or you need to hold your arm close to your ear like someone doing a very weird kind of salute? It's one of the reasons I have a lot of reservations about the Apple Watch. I have always hated talking watches with a passion. Do I really want to use one?

    I’m in noisy environments a lot so I’ll explore this potential problem more as the days go on.

The other problem I had was when we were eating. I'd move my arm and the watch would start talking. It's very irritating, yet I can see the benefit of this feature being enabled when I'm walking. Unfortunately there's no quick way of disabling it that I know of, though I must say that I haven't bothered reading the manual yet. I probably should have read some of it by now, but I generally only read the manual when all else fails.

    I got the opportunity to configure the watch a little more last night when I got back at 1AM. It seems easy enough to use.

One complaint I have is that Voiceover is far too sluggish. That doesn't mean it's very slow to respond; it just means that it's slower than the phone to respond to flicks and taps. This is probably an unfair comparison to make, as the phone has a much more powerful processor, but if the screen reader doesn't respond instantly to gestures, the user interface feels sluggish and the experience feels very cumbersome.

    I’m being harsh. This is the first version of the Apple Watch but for the price I’ve paid for it, I demand a certain standard. The Voiceover implementation doesn’t begin to live up to that standard.

One of my plans when buying the Apple Watch was to make my own watch face. This wouldn't be a visual face; it would use the Taptic Engine to provide the time as a sequence of vibrations. Unfortunately Apple put a stop to my plan by restricting the development of watch faces.

One very positive point about the Apple Watch is that it is smaller and lighter than my TISSOT TOUCH SILEN-T watch.


  • Jaws scripts to virtualize list view items.

In work at the moment, I spend a huge amount of time in massive list views. These list views can have thousands of items and up to 256 columns. At first, reading them with Jaws was one of the most stressful things I've ever done. The Customize List View scripts don't work because of a bug in the user interface of this application: every time focus changes to or away from the main application, the control that previously had focus will no longer have it when you return to the main window. It is a torturous situation to be in, because when I'm in this application I need information quickly and accurately. Also, most of the columns contain numbers that are very important, so for the first week or two in my new job you'd find me with my head down in major concentration mode, trying to listen to Jaws fly through all the columns so I could pick out the tiny nugget of information that I needed from column 10 or, worse, 210. I'd finish the day completely exhausted from this effort, so I badly needed a solution.

Jaws has a script that will allow you to read the first ten columns of a list view, but this is very limited when you consider that it only lets a user intentionally work with ten list view columns. What I needed was a script that would let me walk forward and back through each column from the beginning to the end of the current item. If I found something that I needed, I could then listen to it and it would be clear. You would not believe how much easier this made my day. However, it became clear that I couldn't always trust myself to remember all the information that Jaws was giving me when working through these list view items, so I decided to expand the script a little to add the current column to the virtual viewer. This is handy as I can then examine the text character by character if I need to, and I can use the clipboard to store that text if I need to use it in notes or in SQL statements that will pull more information from the database. Again, this minor change made things much easier for me.

However, there was one more thing that I needed. When sighted people were using these lists, they could compare two items visually much faster than I could with my scripts. Yes, the column I was on was retained even when I moved to a different list item, so say / virtualize current column worked very well, but it wasn't quite what I needed for every situation. The next solution has proven to be even more helpful than the first. Now, when I choose to, I can virtualize the entire list item, so using the virtual viewer I can arrow up and down the list of column headers and the text within each one to get the data in the best format for me to understand quickly. Thanks to some Excel queries, I can virtualize two columns then use a diff function to work out the differences much faster than anyone who can see.

    These scripts have actually changed the way I work with list views. I really hope they are helpful to you.

Disclaimer: I won't support these. I'm not a script writer; I sometimes manage to scrape scripts together with the help of others or by taking chunks from existing scripts written for Jaws. In this instance, I butchered a few other scripts to make these. Moving back and forward through list items works very well, but for some reason the count is slightly off, so you may find that you need to move to the next list item twice to make it actually go forward.

I added these to my default scripts because after a while I needed this functionality in a number of applications. Add them wherever you like. At the top of your file in the globals section, you need to declare a variable for holding the current list column number. The line is below:


    int CurrentListColumn

Remember to add a comma to the end of the previous variable declaration or your script won't compile.
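As a sketch, assuming your globals section already contains at least one variable (the first name below is purely hypothetical), it would end up looking something like this:

globals
int SomeExistingVariable, ; a variable already in your file
int CurrentListColumn ; holds the current list view column for these scripts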

Here are the scripts. Add them to the bottom of the default file if you choose to use that. Remember to also assign keyboard commands to each script. I'm sorry, but if you aren't sure how to do this, you may need to ask the Jaws script mailing list or read the script help topics; I can't promise to help you out as I'm busy enough as it is.
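For anyone who does want to edit the key map directly rather than using Keyboard Manager, assignments in the default.jkm file take the form key=script name under the [Common Keys] section. The key combinations below are only examples that I haven't checked for conflicts, so treat this as a sketch:

[Common Keys]
Control+Shift+RightArrow=ReadNextListviewColumn
Control+Shift+LeftArrow=ReadPreviousListviewColumn
Control+Shift+c=VirtualizeCurrentListColumn
Control+Shift+l=VirtualizeAllListColumns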


    Script ReadNextListviewColumn ()
    var
    int nCol,
    int nMaxCols,
    string sHeader,
    string sText,
    handle hCurrent,
    int nCurrent
If (CurrentListColumn < 1) then
CurrentListColumn = 1
EndIf
If !(GetRunningFSProducts() & product_JAWS) then
return
EndIf
let hCurrent=getCurrentWindow()
if !IsTrueListView(hCurrent) then
sayMessage(OT_ERROR,cmsgNotInAListview_L,cmsgNotInAListview_S)
return
endIf
let nMaxCols=lvGetNumOfColumns(hCurrent)
let nCurrent=lvGetFocusItem(hCurrent)
let nCol=CurrentListColumn
let sHeader=lvGetColumnHeader(hCurrent,nCol)
let sText=lvGetItemText(hCurrent,nCurrent,nCol)
say(sHeader,OT_NO_DISABLE)
say(sText,OT_NO_DISABLE)
say(IntToString(CurrentListColumn),OT_NO_DISABLE)
if (nCol < nMaxCols) then
CurrentListColumn = CurrentListColumn + 1
EndIf
if (nCol > nMaxCols) then
SayFormattedMessage(OT_ERROR,formatString(cmsgListviewContainsXColumns_L,intToString(nCol),intToString(nMaxCols)),formatString(cmsgListviewContainsXColumns_S,intToString(nCol)))
return
endIf
    EndScript

    Script ReadPreviousListviewColumn ()
    var
    int nCol,
    int nMaxCols,
    string sHeader,
    string sText,
    handle hCurrent,
    int nCurrent
    if (CurrentListColumn > 1) then
    CurrentListColumn = CurrentListColumn - 1
    EndIf
    If !(GetRunningFSProducts() & product_JAWS) then
    return
    EndIf
    let hCurrent=getCurrentWindow()
    if !IsTrueListView(hCurrent) then
    sayMessage(OT_ERROR,cmsgNotInAListview_L,cmsgNotInAListview_S)
    return
    endIf
    let nMaxCols=lvGetNumOfColumns(hCurrent)
    let nCol=CurrentListColumn
    let nCurrent=lvGetFocusItem(hCurrent)
if (nCol < 1) then
let nCol=1
endIf
if (nCol > nMaxCols) then
    SayFormattedMessage(OT_ERROR,formatString(cmsgListviewContainsXColumns_L,intToString(nCol),intToString(nMaxCols)),formatString(cmsgListviewContainsXColumns_S,intToString(nCol)))
    return
    endIf
    let sHeader=lvGetColumnHeader(hCurrent,nCol)
    let sText=lvGetItemText(hCurrent,nCurrent,nCol)
    say(sHeader,OT_NO_DISABLE)
    say(sText,OT_NO_DISABLE)
    say(IntToString(CurrentListColumn),OT_NO_DISABLE)
    EndScript

    Script VirtualizeCurrentListColumn ()
    var
    int nCol,
    int nMaxCols,
    string sHeader,
    string sText,
    handle hCurrent,
    int nCurrent

    If !(GetRunningFSProducts() & product_JAWS) then
    return
    EndIf
    let hCurrent=getCurrentWindow()
    if !IsTrueListView(hCurrent) then
    sayMessage(OT_ERROR,cmsgNotInAListview_L,cmsgNotInAListview_S)
    return
    endIf
    let nMaxCols=lvGetNumOfColumns(hCurrent)
    let nCol=CurrentListColumn
    let nCurrent=lvGetFocusItem(hCurrent)
if (nCol < 1) then
let nCol=1
endIf
if (nCol > nMaxCols) then
    SayFormattedMessage(OT_ERROR,formatString(cmsgListviewContainsXColumns_L,intToString(nCol),intToString(nMaxCols)),formatString(cmsgListviewContainsXColumns_S,intToString(nCol)))
    return
    endIf
    let sHeader=lvGetColumnHeader(hCurrent,nCol)
    let sText=lvGetItemText(hCurrent,nCurrent,nCol)
    say(sHeader,OT_NO_DISABLE)
    say(sText,OT_NO_DISABLE)
    say(IntToString(CurrentListColumn),OT_NO_DISABLE)
    UserBufferClear ()
    UserBufferAddText (sHeader)
    UserBufferAddText (sText)
    UserBufferActivate ()
    SayLine ()
    EndScript

    Script VirtualizeAllListColumns ()
    var
    int nCol,
    int nMaxCols,
    string sHeader,
    string sText,
    handle hCurrent,
    int nCurrent

    If !(GetRunningFSProducts() & product_JAWS) then
    return
    EndIf
    let hCurrent=getCurrentWindow()
    if !IsTrueListView(hCurrent) then
    sayMessage(OT_ERROR,cmsgNotInAListview_L,cmsgNotInAListview_S)
    return
    endIf
    let nMaxCols=lvGetNumOfColumns(hCurrent)
    let nCol=1
    let nCurrent=lvGetFocusItem(hCurrent)
    UserBufferClear ()
while nCol <= nMaxCols
let sHeader=lvGetColumnHeader(hCurrent,nCol)
let sText=lvGetItemText(hCurrent,nCurrent,nCol)
UserBufferAddText (sHeader)
UserBufferAddText (sText)
let nCol = nCol + 1
EndWhile
UserBufferActivate ()
EndScript


  • First time with the Raspberry Pi

    Thanks to Emma and her mother, Santy was very good to me this year. When they asked what to get the man who now has everything he wants, my answer was simple. A Raspberry Pi and a few things that will let me mess around with it.

So, this morning I unwrapped a Raspberry Pi Model B, a power supply, an extra-long USB cable for when I want to power it off my laptop, a case for the Raspberry Pi, a camera board and a case for that board. Yes, you read that right: a camera board. I want to play around with motion detection, colour detection and generally interfacing with the real world through Python.

The first thing that struck me was the number of tiny boxes. There were boxes for:

    • The Raspberry Pi board
    • The USB cable
    • The power supply
    • The camera board
    • The camera board case
• The Raspberry Pi case
    • The SD card

    The second thing that struck me was the tiny size of the Raspberry Pi and the camera board. The Raspberry Pi is no bigger than a credit card. Going from the shortest side with the USB ports facing you, you find from left to right, the LAN port and two USB ports. Turning the device around to the right so that the long edge is facing you, there is one composite port and one audio out port. Continuing this time on the next short edge, you find the SD card reader on the bottom of the board and a micro USB port used for powering the device to the right of this on the top of the board. Continuing on around to the next long edge you find the HDMI port. If your television supports this, audio will also be piped through this port. All the ports are on the top of the card and the card reader is on the bottom.

The camera board is connected by a ribbon cable that is attached at one end to the board. The other end attaches to the Raspberry Pi just behind the LAN and USB ports. Getting this lined up took sighted assistance from my wife, I must admit. I probably could have done it with time, but I think I might be getting a bit lazy where this kind of thing is concerned. You'll agree with me if you see the camera board: it's really tiny! The case that you can buy for it is very small as well. The camera goes into the back, and there are two very small placeholders at the top that hold it in place. They're hard to find though.

Putting on the case is very straightforward and didn't require any sighted assistance at all. The only thing I would say here is that getting the four screws in was actually quite difficult. I'd be a reasonably strong person, I think, but it took a lot of strength to get those screws in. The other thing is, I'm glad that I have a screwdriver set for fixing ultra-portable laptops, as the screw heads wouldn't have been compatible with a standard head. The only reason I mention this is that the Raspberry Pi is meant to be a device that is usable by kids. Getting these screws in would definitely require adult assistance. Either that or last night's Guinness had more of an impact than I thought.

Preparing to boot it for the first time, I first had to download the Raspbian image to install to the SD card. I had done this ahead of time by going to the downloads page on the Raspberry Pi website. That's one of the best download pages I've seen, actually: clean and uncluttered, and the Win32 Disk Imager software that I needed to write the Raspbian image onto the SD card was available as a link, making the process really straightforward. I wish I could say the same for the Disk Imager site. It's hosted by SourceForge, a website that I don't particularly like. It's full of pointless regions and the download link is very badly labelled; if you're looking for the download, you'll find it by searching for "download the unnamed link". That's no reflection on the Raspberry Pi of course. It's just worth noting if you're preparing to follow the same process I did.

The Win32 Disk Imager archive is 5.41MB and the Raspbian image I downloaded is 783MB.

I had read previously that the interface for Win32 Disk Imager was not accessible, as it is written in Qt, and this was certainly the case for me. However, I was able to muddle through. Basic instructions might be useful for users of other screen readers, so if you're interested, give me a shout and I'll write them up for you.

When the disk imager process finished, I had a quick look at the SD card. In there, I found a config.txt file. Curiosity of course got the better of me, so I went in and had a look. I found an overclocking option, so I uncommented it. I had read in a few of the forums that it was safe to do this, so I thought it was worth a shot. There was a link to the Raspberry Pi site at the end of the file, but after skimming through the page for a moment I decided that I had enough to get started with. I'll probably tweak this config file a little more when I've played around with the Raspberry Pi for a few days.
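For anyone following along, the overclocking entry I uncommented looked something like the line below. I'm quoting from memory, so the exact value in your image may differ; check the comments in your own config.txt before changing anything.

# in config.txt on the SD card's boot partition
arm_freq=800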

Right. Now Raspbian is installed onto the SD card, the case is together with the ribbon cable sticking out and attached to the camera, and I have all the cables and so on that I'm going to need sitting to one side, so all that's left is to connect the tiny device to my television. I know the first setup screen isn't accessible so I'll need Emma's help with it, but after that I'll ensure SSH is enabled and get going.

    I’ve also bought a 7 port powered USB hub. The Raspberry Pi doesn’t have enough power to support many unpowered USB devices so when I’m connecting the Arduino to it I’ll need to give it a bit of a boost.

Connecting the Raspberry Pi to the television and giving it power was absolutely no problem at all. Within a minute or two, the setup screen launched and, with the assistance of Emma, my wife, the system was configured in no time. A few things were a little unusual. For example, instead of selecting your keyboard layout, it wanted you to select the keyboard make and model. The localization screen was also a bit confusing. Overall, the configuration interface wasn't as snappy or responsive as others that we have used, but this is most likely a result of the low processing power of this device.

    Of course, the first thing I changed was the user password. I also changed the hostname and checked for updates. Aside from that, oh, and increasing the partition size, there was nothing else I had to do.

One thing I should have done right away was change the IP to a static address. I have DHCP on this network of course, but when I plugged the Pi in to one of the LAN ports in my office it got a completely different IP for some reason. That's really strange, as usually my DHCP server recognises the MAC address and continues to respect the lease. You wouldn't believe how much time I wasted trying to figure out what node on my network the Raspberry Pi was. I have far too many things connected in this house, so trying to sift through DHCP logs is very cumbersome. I gave up and just set the address manually in the end.
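For reference, on Raspbian a static address is set in /etc/network/interfaces. This is only a sketch; the addresses below are placeholders for whatever suits your own network.

# /etc/network/interfaces - example static configuration for the wired port
auto eth0
iface eth0 inet static
address 192.168.1.50
netmask 255.255.255.0
gateway 192.168.1.1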

The first thing I did when I got connected via SSH was update the packages and the firmware. I'm surprised the start-up / configuration wizard didn't do this automatically; it seemed logical that it would check for all available updates when the option to apply updates was available in the menu.
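For completeness, the commands I ran were along these lines. Note that rpi-update, the firmware updater, isn't always preinstalled, so you may need to install it first.

# refresh the package lists, then upgrade installed packages
sudo apt-get update
sudo apt-get upgrade
# update the Raspberry Pi firmware
sudo rpi-update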

After a few reboots and some testing, I'm now in a position where I can begin playing around with the Pi. I've read so much online, and I've bought and read so many books on the subject in the past few weeks, that I'm really looking forward to getting my hands dirty.

The first thing I'm going to do is get something working that uses the camera as a motion detector for a light bulb, with this handy tutorial as a starting point.

I didn't use any of the information on this next site while getting up and running with the Pi, but I would like to commend their work. It's people like them that continue to push accessibility forward, and I hope they are recognised for the work they have done. Please look at the Raspberry Vi website for more details and to get involved. I learned of this project while searching on Google for any accessibility problems I might encounter.

Finally, of course, I have to thank Emma and her mother. I'm quite a few glasses of wine in at this stage, but I've had a lot of fun playing around with this new toy today.


  • An introduction to Android and accessibility with Talkback.

In this podcast I wanted to give you a sense of what I like and what I don't like about the Android operating system. I'm using the Talkback screen reader, so my perspective will be mostly focused on the accessibility of the platform and apps for this particular introduction.

    Listen to my introduction to Android and accessibility.


  • A draft introduction to Android accessibility with Talkback.

This is by far one of my weaker podcasts but it's late! Give me a break! I just wanted to set up the equipment and get the ball rolling. Please leave me your comments, suggestions, questions and ideas. I will definitely cover more about this platform over the next few days.

    My thanks to users of this platform for answering my many questions. Please visit The blind geek zone for a very interesting podcast by Mike Arrigo. He does a much better job than I have done introducing the platform.

    Listen to the first introduction to Android and talkback.

Again, sorry if I sound tired and half asleep. I'll provide a better introduction shortly.


  • Jaws 14 now requires Internet access to run.

    I have encountered a problem with using Jaws on servers since the release of Jaws 14.

fsbrldspapi.dll is loaded by Jaws during installation if you're installing it while standing in front of the server, but if you're installing Jaws remotely using the /type remote switch, the installation doesn't speak or provide Braille output. In that case, the fsbrldspapi.dll file will be loaded when you run Jaws for the first time.

When you are installing or running Jaws on a system, be it a server or a workstation, running Windows 2008, 2008R2, 7 or 8 without Internet access, you will encounter the following error message:

    JFW.EXE. Referral returned from the server.

It would appear that this issue began popping up with an update of Jaws 13 that was released around April.

The problem is that the Jaws driver signing program requires trusted certs that are downloaded from Microsoft on an as-needed basis.

    More details about how trusted certs are downloaded in Windows 2008 and 2008R2 can be found at the following Microsoft KB link:
    http://support.microsoft.com/kb/931125

In previous versions of Windows, up to XP and 2003, Windows updates included these certs.

However, it would appear that it is all but impossible, or at best very difficult, to apply these certs to servers that are offline. The only way I can see of doing it at the moment is to find the required cert and install it on each system, probably through an SCCM advertisement.
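As a sketch of that manual approach: assuming you have exported the required certificate to a .cer file (the file name below is a placeholder), it can be added to the machine's trusted root store with certutil.

certutil -addstore -f Root RequiredCert.cer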

I have asked Freedom Scientific to get back to me on this. Although I know that a lot of their staff are on vacation this week due to the Thanksgiving holiday, I have no confidence that they will resolve this new dependency.

In my opinion this is a bug that should be resolved. At the very least, a specific error message should be provided when Jaws cannot start due to this issue. What really should happen is that when the certs cannot be used, Jaws starts with as much functionality as possible without loading this DLL; in other words, Braille wouldn't be available.

I know that some users really need Braille, and I'm being a bit selfish here, so I'm really sorry.

I have reported a large number of bugs to Freedom Scientific since the release of Jaws 14. I am hoping that they will be resolved; however, I get the usual answers of "No one else has reported this" and "We can't reproduce that problem here". I feel like I'm fighting an uphill battle.

If anyone has any suggestions then I'm all ears. Otherwise, if you could email Freedom Scientific support with any problems you're having with Jaws 14, we might get some pressure put on the developers to prioritise a bug fixing exercise.