July 2008

"The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."

Tim Berners-Lee

While it seems intuitive that applications, especially open source ones, should allow all to use and enjoy them, many developers are unaware of the need for accessible applications. Providing accessibility in information technology is not difficult, but it does require a basic understanding of different types of disabilities, commonly used assistive technologies, and the special accessibility features built into languages and standards. Most of all, accessibility requires a conscious effort and a desire to include everyone.

The accessibility of computer software has seen drastic improvements over the past two decades. This article reviews this progress, examining the technologies developed and offering guidelines for developers to create accessible applications.

The Need for Accessibility

Until recently, the largest driving force behind desktop computing environments was Microsoft, first with MS-DOS and later with variants of Microsoft Windows. These operating systems were not designed with the needs of people with disabilities in mind. Many people, including those who were blind or physically disabled, were unable to use applications written for Microsoft operating systems. These applications assumed that computer users could:

  • read and react to text and images displayed on the screen
  • type on a standard keyboard
  • select text, pictures, and other information using a mouse
  • react to sounds played

The last point is less of a limitation, since most software does not rely exclusively on audio for feedback.

A person unable to perform one of these tasks was effectively shut out of many popular computer applications. Several groups of people have difficulty with at least one of them:

  1. Print disabled: blind, low vision, obstructed vision, dyslexic, cognitively disabled, and illiterate individuals.
  2. Physically disabled: users with amputations, paralysis, repetitive stress injuries, cerebral palsy, muscular dystrophy, Parkinson's disease, or other conditions that limit mobility.
  3. Hearing impaired.

We must also consider the growing number of aging baby boomers who are beginning to experience problems with vision or dexterity. When all these groups are added together, the number of potential users is substantial.

Alternative Ways to Access Screen Contents

Most computer programs are so visual that they are difficult or impossible for people with visual impairments to use. This need not be the case. Here is how non-print readers use desktop software today:

  1. Text-to-speech (TTS): those who cannot read print usually rely on software that speaks the screen's contents aloud; a minimal TTS sketch follows this list. TTS is also useful for other print disabilities such as dyslexia and, for those who cannot speak, in place of their own voice. Finally, this technology can be useful to mainstream users, either on portable information appliances or when the eyes are busy elsewhere.
  2. Magnification: enlarges the screen's contents. For those with low vision, a larger font, a built-in high contrast theme, or an extra large screen may suffice. Otherwise, screen magnification programs may be used. These allow zooming in on portions of the screen while following the mouse or the current focus. Screen magnifiers also often include basic TTS and the ability to filter text and images through various color palettes, such as black on yellow for high contrast, or green on blue for low contrast.
  3. The Optacon: provided access to printed words, graphics, and on-screen information by means of an array of vibrating pins the size of an index finger. The user read the vibrating pins with one hand while moving a miniature camera over the material with the other. Unfortunately, the unit is no longer produced, although there is occasional talk of resurrecting this useful device.
  4. Braille: a solution used for quiet reading, for detailed work, and by deaf-blind users. It can come in the form of hard copy braille printed on braille embossers or from a refreshable braille display. These technologies require special drivers, braille formatting routines, and software-based text-to-braille translation. The importance of braille itself must be emphasized: for those who read it, braille can offer higher levels of employment and life fulfillment.
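
As a concrete illustration of point 1, the short sketch below sends a sentence to a software synthesizer through Speech Dispatcher, the TTS layer commonly used on open source desktops. It is only a sketch: it assumes the speechd Python bindings that ship with Speech Dispatcher are installed, and the client name and spoken text are arbitrary.

    import time
    import speechd  # Python bindings shipped with Speech Dispatcher

    # Connect to the running Speech Dispatcher daemon.
    client = speechd.SSIPClient("tts-demo")
    client.set_rate(20)    # speak slightly faster than the default
    client.speak("You have three unread messages.")

    time.sleep(3)          # speak() is asynchronous; give it time to finish
    client.close()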

Audio- and braille-based user interfaces are concepts for which software designers are not historically trained. Dealing with information when you are blind is like seeing everything through a mail slot: sequentially and methodically. Only small pieces of sequential, non-graphical information can be conveyed via TTS or a refreshable braille display. Whatever the user does, the software needs to respond with small, bite-sized pieces of information. Ideally, the software makes intelligent decisions so the user does not have to wade through irrelevant data.

Alternative Ways to Enter Data

Another problem is how people with disabilities get information into the computer. If you're physically disabled, you may not be able to type on a regular keyboard or use a mouse. Here are some of the alternative ways physically disabled people enter information:

  1. Sticky keys: make entering key combinations easier. For example, to type a capital letter, press and release the Shift key, then press the letter to be capitalized. The sticky key technique is used by people who have only one usable hand, or who have no use of their hands and type using a stick held in the mouth.
  2. Single switch: these technologies enable people with severe physical disabilities to operate a computer. Some users enter information by choosing from lists of options: they might press a switch to begin moving a highlight bar through the list, and release it when the desired option is highlighted. A minimal sketch of this scanning technique follows the list.
  3. Special keyboards: exist to make data entry easier. However, any special features are generally handled in the keyboard itself so that no special programming is required.
  4. Speech recognition: allows people to talk to the computer. This technology has come a long way, but still needs to be more integrated into mainstream software.
  5. Consistent keyboard support and hotkeys: full keyboard access to every function lets users avoid the mouse entirely.
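
The scanning approach described in point 2 can be sketched in a few lines of Python. This is purely illustrative, not real assistive software: is_switch_pressed is a hypothetical callable standing in for whatever driver reports the state of the user's switch, and printing stands in for drawing a highlight bar.

    import time

    def scan_and_select(options, is_switch_pressed, dwell=0.7):
        """Cycle a highlight through options while the switch is held;
        the item highlighted when the switch is released is selected."""
        index = 0
        while is_switch_pressed():        # holding the switch keeps the scan moving
            print("highlighted:", options[index])
            time.sleep(dwell)             # dwell time before advancing
            if is_switch_pressed():
                index = (index + 1) % len(options)
        return options[index]             # switch released: select the current item

    # Example use: choice = scan_and_select(["Yes", "No", "Help"], switch.pressed)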

Testing with people who have disabilities generally benefits everyone. Use the accessible toolkit checklist to make sure your user interface (UI) controls adhere to standards.

The Lack of Context

To meet the needs of disabled users, many accessibility hardware and software vendors create products to help people who cannot perform one of the aforementioned four basic tasks. Some examples of these assistive devices and software include: i) screen reading software; ii) TTS; iii) alternate input devices; iv) voice recognition software; v) screen magnification software; and vi) comprehension software, which allows a dyslexic or learning disabled computer user to see and hear text as it is manipulated on the computer screen.

An entire adaptive technology industry has grown to meet the accessibility needs of disabled users. One place to learn more about this industry is the CSUN conference in Los Angeles, which takes place every year in March.

The solutions developed by accessibility vendors have greatly increased the employment and personal fulfillment opportunities of hundreds of thousands of persons with disabilities, and the importance of their work cannot be diminished.

However, these solutions fall short of providing people with disabilities with a working environment which is completely accessible and usable. This is due to a simple problem of context: a user's interaction with a computer is governed by the situation in which this interaction takes place. When the user types something on the keyboard, or when an application displays text or images on the screen, the exact meaning of these actions is determined by the context in which they take place. For example, one application might display the image of a light bulb to indicate that it is processing a task, while another might display it as an indicator that it has completed processing a task. Unless the application somehow communicates the meaning of the light bulb image, a blind user cannot tell what the application is attempting to convey. Similarly, voice recognition software often needs information about the context of a user's interaction in order to make sense of what the user is saying. This context problem still plagues modern accessibility aids and solutions.

The first notable attempt at solving this problem was put forth by Microsoft in 1997 and is called Microsoft Active Accessibility (MSAA). This initiative recognizes that complete accessibility is not possible without cooperation between applications and accessibility aids such as screen reading software or voice recognition software. MSAA defines a Windows-based standard by which applications can communicate context and other pertinent information to accessibility aids.

This solution has seen only partial success, largely because it requires significant changes to the applications being made accessible. Because most popular desktop and productivity applications are not open source, disabled people must rely on the companies that produce the software to make it accessible. These companies are often reluctant for various reasons, including the large amount of time required to modify the original application. On a positive note, recent federal purchasing rules, such as Section 508, have caused many companies to pay attention and implement MSAA support.

Enter Open Source Software

Microsoft was on the right track with MSAA, but because the source code to most popular desktop applications used in large corporations is not publicly available, those applications were never made fully accessible. With open source software, however, making the necessary accessibility modifications is entirely possible.

Open source software (OSS) is an ideal way to meet the needs of disabled users. Accessibility can be fully integrated into the core design, rather than tacked on as an afterthought. OSS also gives disabled programmers a chance to control their own destiny by granting them the opportunity, and the right, to fix inaccessible software themselves.

Furthermore, any software solution that enables equality should by all rights be free of charge. If no special hardware is required, why should a disabled person pay extra money to use the same software as everyone else? That said, there is still an important role for adaptive technology vendors in creating special services and hardware, or even proprietary software on platforms where that is appropriate. The ideal situation would be for adaptive technology professionals to make money in the underserved areas of rehabilitation, training, and support. Each end user has a unique set of problems, and in the open source world, providing highly customized solutions can be a business in itself.

A number of companies have set out to improve on MSAA and further develop accessibility application programming interfaces (APIs) that would benefit everyone. Under the umbrella of the Linux Foundation, a group from IBM, Mozilla, Sun Microsystems, and several assistive technology vendors has developed an enhancement to MSAA called IAccessible2. IAccessible2 is fully compatible with MSAA and enhances accessibility in areas where MSAA has weaknesses. With IAccessible2, access to rich content such as web pages, word processing or spreadsheet documents, and multimedia presentations is possible without having to rely on screen analysis techniques for context. This guarantees much more accurate access to rich content, allowing:

  • both screen reading and screen magnification software to present a better picture to the visually impaired user
  • voice dictation software to more accurately interface with such applications to implement features such as "say and select"
  • alternative input devices and software to interface with all possible elements without having to rely on screen coordinates or other such inaccurate mechanisms

Lessons were learned from earlier attempts at making Linux graphical user interfaces more accessible using the GNOME accessibility toolkit (ATK) APIs and the Gnopernicus screen reader and magnifier. That experience was distilled into the GNOME Assistive Technology Service Provider Interface (AT-SPI). This interface enables a range of open source assistive technology solutions for the GNOME desktop:

  • the Orca screen reader, for people with visual disabilities, offering speech and braille output and magnification functionality
  • the GNOME on-screen keyboard (GOK) software from the University of Toronto's Adaptive Technology Resource Centre

Another software project to take advantage of these improved accessibility APIs on Linux is Jambu, which provides alternative input for motor-impaired computer users who can use only a single switch.
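
To give a feel for how assistive technologies such as Orca consume this information, the sketch below uses the pyatspi bindings to listen for AT-SPI focus events and print the name and role of whichever control gains focus. A real screen reader would route this text to speech or braille instead; the snippet assumes a GNOME session with accessibility support enabled.

    import pyatspi  # AT-SPI client bindings, also used by Orca

    def on_focus_changed(event):
        # detail1 is 1 when the control gains focus, 0 when it loses it.
        if not event.detail1:
            return
        accessible = event.source
        name = accessible.name or "unnamed"
        print("%s (%s)" % (name, accessible.getRoleName()))

    # Ask the AT-SPI registry for focus-change events, then enter the
    # event loop; press Ctrl+C to stop.
    pyatspi.Registry.registerEventListener(on_focus_changed,
                                           "object:state-changed:focused")
    pyatspi.Registry.start()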

In parallel, several open source projects were enhanced to support the AT-SPI APIs. Most of the software included in the GNOME desktop, such as Gedit, Pidgin, and Terminal, as well as many mainstream projects such as Mozilla Firefox, OpenOffice.org/StarOffice, Rhythmbox, and Tomboy, is more accessible today than it was a few years ago. There are several success stories of people who now make a living, or found a new job, because of the accessible applications available on Linux.
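
On the application side, providing that context is often only a line or two of code. The hypothetical snippet below uses the PyGTK bindings to give an icon-only button an accessible name and description through ATK, which is the information AT-SPI ultimately hands to a screen reader; the strings and the choice of a "Find" button are illustrative.

    import gtk  # PyGTK bindings for GTK+

    # An icon-only button is obvious to a sighted user, but a screen
    # reader has no label text to announce for it.
    button = gtk.Button()
    button.set_image(gtk.image_new_from_stock(gtk.STOCK_FIND,
                                              gtk.ICON_SIZE_BUTTON))

    # Name and describe the control through ATK so that AT-SPI clients
    # such as Orca can announce its purpose.
    accessible = button.get_accessible()
    accessible.set_name("Find")
    accessible.set_description("Search the current document for a phrase")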

Developers' Guidelines

As a developer, you can follow several suggestions to make your applications accessible. At the Mozilla Project, we encourage developers to follow the general front-end accessibility requirements. In particular, there are a number of potential gotchas when developing with the Mozilla XUL UI; developers should follow the practical techniques listed in the Accessible XUL Authoring Guidelines. These guidelines cover many scenarios, and if you take the time to learn them, they will become a natural part of your design and engineering technique. Ensuring correct keyboard accessibility when developing new controls is important for providing consistency. Mozilla's XUL and HTML widgets already support proper keyboard accessibility.

New controls should support MSAA/IAccessible2 and ATK via the cross-platform nsIAccessible interface. Engineers can provide context simply by creating an nsIAccessible for each custom control, and the infrastructure for doing so is straightforward.

Conclusion

No matter what kind of work you do, the basis of accessibility is in the understanding that every user is different. The exact techniques may change depending on the engineering environment. There are many resources available to application developers for creating accessible applications, several of which are mentioned in the Recommended Resources section at the end of this article.

This article is based upon the Mozilla document "Software Accessibility - Where Are We Today?". The original is available from the Mozilla website.

Recommended Resources

Mozilla Accessibility Project

Links and Resources in Accessibility

Marco's Accessibility Blog

Dive Into Accessibility
