If you’re building a new web accessibility testing policy, you’ll need to determine how you test your content.
The standards for digital accessibility are well established: The World Wide Web Consortium (W3C) publishes the Web Content Accessibility Guidelines (WCAG), which contain testable statements called success criteria that apply to websites, mobile apps, and other digital products. In principle, you can read each criterion and ask whether your content satisfies it.
But if you’re auditing a large website, manually testing your content against each success criterion may be impractical. Automated accessibility testing provides a convenient solution: By using artificial intelligence (A.I.), automated tools can instantly find many of the barriers that affect users with disabilities.
Of course, all automated tools have their limitations. Below, we’ll discuss those limitations — and provide tips for establishing a sustainable, accurate accessibility testing policy.
If an accessibility issue can be defined as “true” or “false,” it’s a good candidate for automated testing.
For example, WCAG requires that the visual presentation of text and images of text maintain a color contrast ratio of at least 4.5:1 (3:1 for large-scale text). This ensures that people with color vision deficiencies (also called color blindness) and other vision disabilities can read text against its background.
An automated tool can easily determine the color of the text and the color of the background, then compare the contrast ratio against WCAG standards. It’s a simple pass-or-fail ruleset — either the text conforms with WCAG, or it doesn’t.
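To make that concrete, here’s a minimal sketch of the contrast math defined in WCAG: each color is converted to a relative luminance value, and the ratio of the lighter to the darker luminance (with a small offset) is compared against the 4.5:1 threshold. The function names and hex-color helper below are illustrative, not taken from any particular testing tool.

```typescript
// Convert an 8-bit sRGB channel (0-255) to its linearized value for luminance math.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a color given as "#rrggbb", per the WCAG definition.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(foreground: string, background: string): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// Example: dark gray text on a white background easily clears the 4.5:1 threshold.
console.log(contrastRatio("#333333", "#ffffff")); // ≈ 12.6
```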
Other examples of accessibility barriers with simple pass-or-fail rulesets include:
This isn’t a comprehensive list. AudioEye estimates that 70% of common accessibility issues can be detected with automation — and about two-thirds of those issues can be resolved automatically.
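As a hypothetical illustration of how simple these rulesets can be, the sketch below flags images that lack an alt attribute entirely, one of the most common machine-detectable failures. The function name is made up for this example, and real testing tools run many more rules than this.

```typescript
// A simple pass-or-fail rule: every <img> must carry an alt attribute (WCAG 1.1.1, "Non-text Content").
// Note that alt="" is allowed for decorative images, so the rule targets a missing attribute, not an empty one.
function findImagesMissingAlt(root: Document = document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll<HTMLImageElement>("img:not([alt])"));
}

const failures = findImagesMissingAlt();
if (failures.length > 0) {
  console.warn(`${failures.length} image(s) are missing an alt attribute.`);
}
```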
But remember, the goal is to address all barriers that could impact users. To conform with WCAG (and comply with non-discrimination laws), you shouldn’t rely on automated accessibility testing alone.
You’re building a website for human users — not for machines. Many accessibility issues require subjective decisions, and at this point, A.I. is not capable of subjective judgment.
Automated testing can (and should) be employed as part of a broader accessibility audit. But currently, automation has limits:
Once again, this isn’t a comprehensive list. Automated testing is a fantastic resource, and the technology continues to improve — but it’s not perfect. To use it effectively, you’ll need to carefully review the results of automated reports (and in most cases, you’ll need to perform additional manual tests).
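To illustrate one common limit: an automated rule can verify that alt text exists, but it can’t judge whether that text is meaningful. In the contrived example below, both tags pass a presence-only check, and only a human reviewer can tell which one actually helps a screen reader user.

```typescript
// Two image tags that both "pass" an automated alt-presence check.
const passesButUnhelpful = `<img src="chart.png" alt="chart.png">`;
const passesAndHelpful = `<img src="chart.png" alt="Bar chart: support tickets fell 40% after the redesign">`;

// A presence-only rule treats them identically; deciding whether the text
// actually describes the image is a subjective, human judgment.
const hasAltAttribute = (tag: string): boolean => /\salt\s*=\s*"[^"]*"/.test(tag);
console.log(hasAltAttribute(passesButUnhelpful), hasAltAttribute(passesAndHelpful)); // true true
```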
Manual testing is a more precise approach for identifying accessibility issues. However, it’s not always practical.
Modern websites often feature dynamic or frequently updated content. If you’re running a large eCommerce website, you might add dozens of pages a day or update product descriptions automatically. If you manage a complex web application, manually testing every page may be impossible.
In those cases, automated testing is crucial. And even if you have a relatively simple website, automation can save you a lot of time, provided you also have a process for manually reviewing and remediating the issues it finds.
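As a rough sketch of what that process can look like, the example below loops a list of key pages through pa11y, one of several open-source automated checkers (the page list is a placeholder, and running this assumes pa11y is installed via npm; it is not a substitute for a full audit).

```typescript
import pa11y from "pa11y";

// Placeholder URLs; in practice this list might come from a sitemap or CMS export.
const pagesToScan = [
  "https://www.example.com/",
  "https://www.example.com/products",
  "https://www.example.com/checkout",
];

async function scanSite(): Promise<void> {
  for (const url of pagesToScan) {
    const results = await pa11y(url);
    // Each reported issue includes a message and the selector of the failing element;
    // flagged issues still need human review before remediation.
    console.log(`${url}: ${results.issues.length} issue(s) found`);
  }
}

scanSite().catch((err) => console.error(err));
```

In practice, a script like this might run as a nightly job or as a step in your deployment pipeline, with flagged issues routed to a person for review.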
When building your testing strategy, keep these tips in mind:
If you’re ready to build a digital compliance strategy, we’re here to help. Get started with a free automated website accessibility analysis or send us a message to connect with an expert.