Accessibility testing by non-disabled developers

Whenever I create a website, at some point the question of its accessibility for people with disabilities comes up. Mostly we’re talking about users of screen readers. The problem is that I don’t use a screen reader myself for day-to-day activities, and the design of these programs remains uncharted territory for me, so no matter how hard I try, I can never be sure that I achieve the intended effect.

A similar impulse struck me a few days ago when I decided to check how accessible my personal website is and to fix any problems I found. I focused on screen readers because, according to the WebAIM survey, they’re arguably the most widespread assistive technology, used not only by visually impaired people but also by people with other types of disabilities.

I already increased the contrast of the pages a few years ago, and the default font is quite big, with lines kept short enough to ease eye strain. This more or less covers people with low vision who occasionally use screen readers or a software magnifying glass.
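
For illustration, the rules I have in mind look roughly like the snippet below; the selectors and values are invented for this article, not copied from my actual stylesheet:

    <style>
      /* Illustrative values: dark text on a light background, a large
         default font and a limited line length. */
      body {
        color: #111111;
        background-color: #fefefe;
        font-size: 1.25rem;
        line-height: 1.6;
      }
      main {
        max-width: 40rem; /* keeps lines from growing too long */
        margin: 0 auto;
      }
    </style>

The next steps mostly touch the skeleton of the site: its HTML code. Two particular documents are great resources for working with accessible HTML: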

  1. Using ARIA, a very helpful knowledge refresher covering most of the principles that developers who want to make their sites better places should follow;
  2. Allowed ARIA roles, states and properties, which lists the ARIA roles allowed in HTML that matter for screen readers, especially when creating non-standard widgets, e.g. buttons acting like links; a short sketch of such a widget follows this list.
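
A tiny sketch of what this means in practice (the URL is made up for this example): a button that is scripted to navigate somewhere behaves like a link, but a screen reader will still announce it as a button, so its role has to be corrected explicitly.

    <!-- Announced as a button, although it actually navigates. -->
    <button onclick="location.href='/about.html'">About me</button>

    <!-- role="link" is, if I read the second document correctly, among the
         roles allowed on <button>, and it tells assistive technology what
         the widget really does. -->
    <button role="link" onclick="location.href='/about.html'">About me</button>

The cleanest solution is of course to use a real <a> element in the first place, but when a non-standard widget is unavoidable, these two documents tell you which roles and attributes are legal and expected.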

On top of that there are static validators for HTML correctness which also pay some attention to the accessibility features of web pages. They’re extremely helpful because even with all that knowledge it’s still easy to miss some spots. I tried two of them, WAVE and the Nu Html Checker, and was very happy with how they worked.
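
To give an idea of what they catch, here is an invented fragment (the file names are placeholders) with two mistakes that are easy to make and that such checkers typically point out: an image without alternative text and a form field without a label.

    <!-- Problematic: no alt text on the image, no label tied to the input. -->
    <img src="portrait.jpg">
    <form action="/search">
      <input type="text" name="q">
      <button type="submit">Search</button>
    </form>

    <!-- The same markup after fixing what the checkers complain about. -->
    <img src="portrait.jpg" alt="Portrait of the author">
    <form action="/search">
      <label for="q">Search phrase</label>
      <input type="text" name="q" id="q">
      <button type="submit">Search</button>
    </form>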

But of course the most important test is actually hearing how the website sounds. Because I don’t use Windows, I can’t test against the most popular readers, NVDA and JAWS. On Linux the most successful screen reader is Orca, so I decided to try it first.

This is when problems started.

Orca

It took me two days to force Orca to read websites, but maybe it’s all on me [1]. I didn’t realise that Orca only detects new windows created after it has started itself and is, sorry for the pun, blind to the others. Unless it isn’t, as with nm-applet, which worked correctly from the beginning. Under the hood Orca uses the AT-SPI DBus interface to communicate with other programs. Not knowing the details of that interface, I find it equally possible that Orca didn’t notify programs about its presence and that the programs didn’t emit or subscribe to the correct events.

As an end user I shouldn’t be troubled by any of this, and I hate to say that this bad initial experience heavily coloured my whole attitude towards using screen readers. I feel sorry for users who aren’t tech-savvy and have to deal with problems like that. Frankly, I’m not even sure I’d be able to install Orca at all if I were blind. Debian promises in its installer’s manual to automatically install support for software speech synthesis if it was activated during installation, but I assume that means espeak or another TTS package and not a screen reader. I hope that an appropriate screen reader is eventually installed and enabled automatically when a graphical environment like GNOME or KDE is selected.

Let’s move on. Apart from its preferences window, Orca doesn’t provide any visual feedback. I’m sure this is partly because graphical libraries on desktop computers are quite fragmented, so I don’t blame Orca here. I simply think that additional indicators (e.g. a frame around the currently read paragraph) would be extremely helpful for people who are not completely blind.

When Orca finally started reading web pages, I tried the following things, which should be among the most common actions for visually impaired users:

  • displaying all headings and links and jumping between them;
  • jumping through paragraphs back and forth;
  • listing all links and jumping through them.

Everything was going smoothly when I suddenly lost the ability to navigate documents. I think I accidentally pressed the wrong keys: keyboard shortcuts for screen readers are quite esoteric, so it isn’t hard to make mistakes. I couldn’t bring Orca back to a state in which I’d be able to perform my tests and had to restart it to continue.

TalkBack

After my initial failures with Orca I tried TalkBack, a screen reader for Android produced by Google. It is licensed under the free Apache 2.0 license and is freely available from the F-Droid store, but the version there is old and buggy. If you don’t mind using the Google Play Store, the version available there is more stable and free of the bugs I found in F-Droid’s version.

TalkBack offers a gesture interface which entirely replaces the ordinary way of interacting with the OS. For example, you can’t simply tap links. Instead you have to select them and then double-tap anywhere on the screen to actually activate them. Links can be selected e.g. by gestures which walk through all the elements of the foreground program. Other gestures show the status bar and handle other common activities.

When an element is selected, the text associated with it is read by the system TTS service. The default LineageOS TTS engine (Pico TTS) didn’t work on my phone, probably because it doesn’t support the language of my system. Espeak worked, but it crashed when I tried to configure it. Android phones with pre-installed GApps should have the non-free Google TTS available, which arguably provides a much better experience. If you’re privacy-conscious and have a rooted phone, you can download Google TTS directly from the Google Play Store, then download the necessary offline language packs and then block internet access for the whole application via AFWall.

Time for the bad things. In my opinion TalkBack is too sensitive by default (though I think it can be configured somehow; there are lots of options which I didn’t check). Way too often I scrolled through several elements at once when I intended to move by only one. If I were blind, I’d constantly miss everything!

Another thing is that AnySoftKeyboard, an otherwise great software keyboard, refused to work with the screen reader at all, and I had to switch to the AOSP keyboard just to type anything. I filed a bug on ASK’s GitHub page and hopefully its maintainers will pick it up [2].

Once again my tests were successful. Although I failed to discover how to quickly scan the page by headings, I was able to read it element by element with no problems.

Afterthoughts

I stopped my tests after checking these two screen readers. The whole experience was very unpleasant, took way too long, and I don’t feel like I have tested even my simple site thoroughly. I had problems installing and using assistive applications almost all the time. I’m sure that it’s partially on me and my lack of experience with this type of technology, but come on! Installing assistive technology should be guided and intuitive from the very first second! Why do I have to read manuals or learn about esoteric techniques for enabling screen readers during OS installation and configuration?

For example, Android requires headphones to be connected during initial configuration to enable TalkBack system-wide; otherwise one has to navigate to the bottom of the settings menu and enable it there, without sight. Maybe it’s a clever trick, but a message from the phone’s main speaker saying e.g. “Hello, to use a screen reader connect your earphones” would be much friendlier in my opinion.

The shortcuts used by Orca are in my opinion worth at least a paragraph or two of criticism, which I’m not going to write because it might merely reflect my inability to comprehend the problems of visually impaired people. However, from the perspective of a non-disabled developer who wants to test his rather straightforward application, screen readers are extremely hard to use. I think they should provide modes with visual hints for non-disabled people, like popups or tray icons where you could simply click and select the action you want. Without aid like this it becomes very hard and frustrating to test applications against screen readers, especially for independent developers with limited resources. I’m not surprised that many simply don’t test them at all.


  1. Donald Norman would very likely disagree. According to his book, The Design of Everyday Things, incorrect usage of things is almost never the fault of the users but a failure of the things’ design; in his view, users too frequently blame themselves when they should blame the designers.

  2. At the time of writing this article, it remained unanswered.