Date, time and description – these are the data most frequently checked while learning about an event, a meeting or an institution’s opening hours. In many cases you will even skip the description in favour of the date, time and title. As a user, you expect the calendar data to be provided clearly, so that you don’t spend too much time interpreting the time of an event. This is one of the basics of good user experience.
Now imagine that your meeting time is described as “s a t zero five zero nine nine ten” or “open to wed”. Confusing, right? The first example is “SAT 05/09, 09:10” read by a screen reader. The second one is “Open: Tue – Wed”.
Problems with dates
It is difficult for a user to understand a date when they are not aware of the convention. The most obvious example is a date such as 05/04/2021: a U.S. citizen will read it as the 4th of May, but for citizens of most other countries it means the 5th of April (when I was a child in my country, the same date would be written as 5 IV 2021). These examples, however, affect everybody, regardless of whether the person has a disability and uses assistive technology to read digital content. Designers take these problems into account during the design process. What often seems to be missed in the design process are the needs of people who have cognitive disabilities, rely on assistive technologies, or both.
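The ambiguity is easy to demonstrate in code. The short sketch below (in Python, purely illustrative) parses the same string with a U.S. month-first format and with the day-first format used in most other countries:

```python
from datetime import datetime

date_string = "05/04/2021"

# U.S. convention: month/day/year
us_date = datetime.strptime(date_string, "%m/%d/%Y")
# Day-first convention used in most other countries: day/month/year
intl_date = datetime.strptime(date_string, "%d/%m/%Y")

print(us_date.strftime("%d %B %Y"))    # 04 May 2021
print(intl_date.strftime("%d %B %Y"))  # 05 April 2021
```

The same eight digits yield two different dates; nothing in the string itself tells the reader (human or machine) which convention was intended.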
The way a screen reader user hears a date read depends on several factors:
- Speech synthesis,
- The screen reader and its settings,
- System settings,
- LANG value.
Text-to-speech and screen readers
Speech synthesis is often an underestimated factor. Many people seem not to differentiate between a screen reader and a speech synthesiser. Even when accessibility and assistive technology specialists refer to a screen reader, they usually mean both a screen reader and speech synthesis. This makes sense in that no screen reader (unless used with a braille display only) would be functional without text-to-speech; but when talking about pronunciation and text interpretation, the difference between text-to-speech and screen readers is crucial. A speech synthesiser, or text-to-speech (TTS) engine, is the voice used by the device. TTS is not exclusive to screen readers and other assistive technologies: when you ask Amazon Echo or Google Home about the time, weather or events in your calendar, it answers you using TTS.
What is the relationship between TTS and a screen reader as far as text interpretation is concerned? It’s quite complex and often unpredictable. Let’s use an example: if you read “Hello, world!” with a screen reader, TTS is fully responsible for what you hear. The screen reader just sends the “Hello, world!” string to TTS. Of course, the output will be different for various TTS engines and voices. If you use a British English voice, you will hear “Hello, world!” with a British accent. If you use Australian English or U.S. English, you will hear the sentence read with Australian and U.S. accents, respectively. But regardless of the TTS and voice, you can be sure you will hear “Hello, world!”, even if you use different screen readers. The comma and exclamation mark will not be pronounced, but you will hear a short pause between “hello” and “world”, which is the expected behaviour when text with commas is read. However, when you set a screen reader to read punctuation, you will get different outputs from different screen readers, even while using the same TTS engine. For instance, NVDA and JAWS (the two most popular screen readers) will read “Hello, world!” as “hello comma world bang” and “hello comma world exclaim”, respectively, using the same Microsoft Windows U.S. English voice. You get two different results because one of the screen readers sends “bang” and the other sends “exclaim” to TTS as the verbal equivalent of “!”. It is worth noting that TTS engines have their own lexicons with pronunciation rules for particular languages, and these rules may also influence the way some strings of text are read; so on some occasions, if you cannot see the strings sent to TTS, it can be difficult to determine which technology is responsible for the final output.
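The division of labour described above can be sketched roughly in code. This is not how NVDA or JAWS are actually implemented; it is a hypothetical Python illustration of a screen reader substituting its own verbal equivalents for punctuation before handing the string to the TTS engine (the function name and everything in the mapping except the “bang”/“exclaim” difference quoted above are invented):

```python
# Hypothetical verbal equivalents for punctuation used by two screen
# readers; the "bang" vs "exclaim" difference is the one described above,
# the rest is invented for illustration.
PUNCTUATION_WORDS = {
    "nvda": {",": " comma", "!": " bang"},
    "jaws": {",": " comma", "!": " exclaim"},
}

def prepare_for_tts(text, screen_reader, speak_punctuation=False):
    """Return the string a screen reader might send to the TTS engine."""
    if not speak_punctuation:
        # Punctuation is left in place; the TTS engine itself decides
        # how to render it (typically as pauses, not as words).
        return text
    for symbol, word in PUNCTUATION_WORDS[screen_reader].items():
        text = text.replace(symbol, word)
    return text

print(prepare_for_tts("Hello, world!", "nvda", speak_punctuation=True))
# Hello comma world bang
print(prepare_for_tts("Hello, world!", "jaws", speak_punctuation=True))
# Hello comma world exclaim
```

The same input string reaches the same TTS engine in two different forms, which is why the spoken output differs between screen readers even with identical voices.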
On a basic level, when a screen reader encounters a date, the date is first processed by the screen reader and then processed by TTS. Even in the simplest scenario (i.e., if we assume that the screen reader does not try to interpret the date and sends the string to TTS as-is), the outcome is not predictable. When you read “Feb.” with NVDA and JAWS, eSpeak (the default NVDA TTS) and Eloquence (the default JAWS TTS) will read “feb”, whereas Microsoft voices and Vocalizer Expressive Premium voices will read “february”. In addition, some screen readers (such as JAWS) allow the user to decide to what degree the screen reader should try to interpret the date; depending on settings, one date may be read in four different ways, and more combinations are possible depending on additional settings (e.g. the punctuation reading level or whether digits are processed by TTS).
Abbreviated day names are expanded by some TTS engines and not by others. What matters is not only the abbreviation but also the use of capital letters (some examples are given in the table in the next section).
Examples of abbreviated day names read by some TTS voices

[Table not preserved in this copy. It compared several TTS voices, including an iOS UK voice: the string “SAT” was spelled out letter by letter (“s a t”) by some voices and read as the word “sat” by others.]
“01/05/2020” may be difficult to interpret if you don’t know what date format has been used—the U.S. or British one. Using a correct LANG value (“EN-GB” or “EN-US”) may help some screen reader users to hear the date correctly. For example, Narrator, a Microsoft screen reader, switches really well between reading U.S. and British dates if LANG is used.
Screen readers are usually programmed to switch voices to match the encoded language of the page. If the page has an incorrect LANG value, its content can be misread. For example, if a page in German has LANG set to “EN” or “EN-US” (developers sometimes forget to set the correct LANG), many screen reader users will hear the page read with an English-speaking voice, and the content will be unintelligible.
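The voice-switching mechanism can be sketched as a simple lookup. The voice table and function below are entirely hypothetical (real screen readers ship many voices and more elaborate matching rules); the sketch only illustrates why a wrong LANG value leads straight to the wrong voice:

```python
# Hypothetical voice table; real screen readers offer far more voices.
VOICES = {
    "en-us": "U.S. English voice",
    "en-gb": "British English voice",
    "de": "German voice",
}

def pick_voice(lang_value, default="en-us"):
    """Pick a TTS voice for a page's LANG value (illustrative only)."""
    lang = lang_value.lower()
    if lang in VOICES:
        return VOICES[lang]
    # Fall back to the primary language subtag, e.g. "de-AT" -> "de".
    primary = lang.split("-")[0]
    return VOICES.get(primary, VOICES[default])

print(pick_voice("EN-GB"))  # British English voice
print(pick_voice("de-AT"))  # German voice
```

A German page wrongly marked “EN-US” would be handed to the U.S. English voice by exactly this kind of lookup, with unintelligible results.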
Of course, you should use a correct LANG even if the page does not contain any dates at all.
Another factor that may influence the way a date is read is system settings. For instance, JAWS does not take LANG into consideration while reading dates. Instead, JAWS sources this information from the regional settings in Windows. Changing the regional format in Windows to English (United States) will result in “01/05/2020” being read as mm/dd/yyyy.
Can the outcome be predicted?
We cannot predict how a screen reader will read a particular date, such as “01/05/2020” or “Feb. 3 2017”. Many screen reader users are familiar with the way their assistive technologies read dates, but there are situations when understanding the date requires closer attention and a lot of mental processing.
This article is about screen readers, but you shouldn’t forget that there is another group of people who may have difficulty understanding dates. Various cognitive impairments, such as those caused by Alzheimer’s disease or a stroke, can result in impaired understanding of abbreviated words. One assistive device for people with these disabilities is a clock that displays the full name of the day and month and uses “morning” and “afternoon” instead of AM and PM.
Whenever you can use an unabbreviated date format, do so. Even if most screen reader users understand standard abbreviated formats such as “05/04/2021” or “Fri, Jun 27, 1975”, there will always be some users who will have trouble, and even those who understand must usually pay closer attention while reading to interpret abbreviated dates correctly. And there are always people with cognitive disabilities who may find understanding the varying formats extremely difficult. Date and time are often crucial data, and therefore should be conveyed clearly.
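As a rough illustration of the recommendation, compare an abbreviated and an unabbreviated rendering of the same date in Python (with the default C locale, %A and %B expand to full English day and month names):

```python
from datetime import date

d = date(2021, 4, 5)

print(d.strftime("%m/%d/%Y"))      # 04/05/2021 - ambiguous across locales
print(d.strftime("%A, %d %B %Y"))  # Monday, 05 April 2021 - unambiguous
```

The second form costs a few extra characters but leaves nothing for the reader, the screen reader or the TTS engine to guess.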