‘Does He Take Sugar?’: The Risks of Standardising Easy-to-read Language
Posted by Brian Kelly on 19 December 2012
Back in September 2012, in a post entitled “John hit the ball”: Should Simple Language Be Mandatory for Web Accessibility?, I described the W3C WAI’s Easy to Read activity and the online symposium on “Easy to Read” (e2r) language in Web Pages/Applications.
The article highlighted the risks of mandating easy-to-read language and, following subsequent discussions with Alistair McNaught of JISC TechDis, led to a submission to the online symposium. Although the reviewers commented that the submission provided “very sound ideas about how to approach e2r on level with other accessibility issues” and that “The argument that the user perspective needs to be taken into account for discussing and defining “easy to read” makes a lot of sense”, the paper was not accepted. Since the reviewers also suggested that “The authors should provide more material on how this step could be realized” and that “More background on BS 8878 and a justification should be added”, we decided to submit an expanded version of our paper to the current issue of the Ariadne Web magazine.
In subsequent discussions while preparing the paper I came across Dominik Lukeš, Education and Technology Specialist at Dyslexia Action, who has published research on language and education policy. Dominik’s blog posts, in particular a post on The complexities of simple: What simple language proponents should know about linguistics, were very relevant to the arguments which Alistair and I had made in our original paper. I was therefore very pleased when Dominik agreed to contribute to an updated version of our paper. The paper, ‘Does He Take Sugar?’: The Risks of Standardising Easy-to-read Language, has been summarised by Richard Waller in his editorial for the current issue of Ariadne:
In “Does He Take Sugar?”: The Risks of Standardising Easy-to-read Language, Brian Kelly, Dominik Lukeš and Alistair McNaught highlight the risks of attempting to standardise easy-to-read language for online resources for the benefit of readers with disabilities. In so doing, they address a long-standing issue in respect of Web content and writing for the Web, i.e. standardisation of language. They explain how, in the wake of the failure of Esperanto and similar artificial tongues, the latest hopes have been pinned on plain English, and ultimately standardised English, to improve accessibility to Web content. Their article seeks to demonstrate the risks inherent in attempts to standardise language on the Web in the light of the W3C/WAI Research and Development Working Group (RDWG) hosting of an online symposium on the topic. They describe the aids suggested by the RDWG, such as readability assessment tools, as well as the beneficiaries of the group’s aims, such as people with cognitive, hearing and speech impairments as well as readers with low language skills, including readers not fluent in the target language. To provide readers further context, they go on to describe earlier work which, if enshrined in WCAG Guidelines, would have had significant implications for content providers seeking to comply with WCAG 2.0 AAA. They interpret what is understood in terms of ‘the majority of users’ and the context in which content is being written for the Web. They contend that the context in which transactional language should be made as accessible to everyone as possible differs greatly from that of education, where it may be essential to employ the technical language of a particular subject, as well as figurative language, and even on occasions, cultural references outside the ordinary.
They argue that attempts to render language easier to understand, by imposing limitations upon its complexity, will inevitably lose sight of the nuances that form part of language acquisition. In effect they supply a long list of reasons why the use and comprehension of language is considerably more complex than many would imagine. However, the authors do not by any means reject out of hand the attempt to make communication more accessible. But they do highlight the significance of context. They introduce the characteristics that might be termed key to Accessibility 2.0 which concentrate on contextualising the use of content as opposed to creating a global solution, instead laying emphasis on the needs of the user. They proceed to detail the BS 8878 Code of Practice 16-step plan on Web accessibility and indicate where it overlaps with the WCAG guidelines. Having provided readers with an alternative path through the BS 8878 approach, they go on to suggest further research in areas which have received less attention from the WCAG guidelines approach. They touch upon the effect of lengthy text, figurative language, and register, among others, upon the capacity of some readers to understand Web content. The authors’ conclusions return to an interesting observation on the effect of plain English which might not have been anticipated – but is nonetheless welcome.
The article is of particular relevance since it brings home very clearly the limitations of WAI’s approach to Web accessibility and the belief that universal accessibility can be achieved by simply following a set of rules documented in the WCAG guidelines. As we’ve explained in the article, this isn’t the case for the language used in Web pages. However, although the approach developed by WAI has significant flaws, the BS 8878 Code of Practice enables guidelines developed by WAI and other organisations to be used in a more pragmatic fashion. We hope that the experiences in using this Code of Practice described by EA Draffan in her talk on Beyond WCAG: Experiences in Implementing BS 8878 at the IWMW 2012 event will help to promote greater use of this approach, including use of the standard to address the readability of Web pages.