More organizations are recognizing the importance of digital accessibility, and a variety of new resources are available to analyze conformance with the Web Content Accessibility Guidelines (WCAG), the consensus standards for accommodating people with disabilities.
Accessibility testing resources fall into two broad categories: automated tools, which scan a site's code for detectable issues, and manual tests, in which human evaluators use the site directly.
Automated tools are especially popular, since they're widely available, inexpensive, and fast: with a quick scan, web developers can analyze their websites using open-source tools. Unfortunately, relying on those scans alone can lead to inaccurate assumptions about a site's overall accessibility.
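To make this concrete, here's a minimal sketch of what such a scan looks like using axe-core, one popular open-source testing engine. The rule tags and the choice to scan the whole document are illustrative, not a prescribed configuration.

```typescript
// Minimal sketch: scanning the current page with the open-source
// axe-core engine. Assumes axe-core is installed and this runs in a
// browser context (e.g., a test harness) where `document` exists.
import axe from "axe-core";

async function scanPage(): Promise<void> {
  // axe.run() resolves with a results object; `violations` lists the
  // rules that failed and the elements that triggered each failure.
  const results = await axe.run(document, {
    // Restrict the scan to rules tagged as WCAG 2 Level A and AA.
    runOnly: { type: "tag", values: ["wcag2a", "wcag2aa"] },
  });

  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
    console.log(`  affected elements: ${violation.nodes.length}`);
  }
}

scanPage();
```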
In this article, we’ll discuss the limitations of manual and automated accessibility tests — and we’ll explain how a hybrid testing methodology can provide a comprehensive analysis of your website’s WCAG conformance.
Many WCAG success criteria are somewhat subjective. For instance, Success Criterion 1.1.1, “Non-text content,” requires websites to provide a text alternative for all non-text content. A text alternative should give users the same information they would receive by perceiving the content visually, and only a human can reliably judge whether a given alternative does so.
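To see why, consider a hypothetical illustration: an automated check can confirm that an alt attribute exists, but it cannot judge whether the text is actually equivalent to the image.

```typescript
// Hypothetical illustration: the kind of presence check an automated
// tool can perform. It confirms an <img> has an alt attribute, but it
// cannot tell whether the text conveys the image's information.
function hasTextAlternative(img: HTMLImageElement): boolean {
  return img.hasAttribute("alt");
}

// Both of these pass the check above, but only the second would
// satisfy a human reviewer applying WCAG 1.1.1:
//   <img src="q3-sales.png" alt="chart">
//   <img src="q3-sales.png" alt="Bar chart: Q3 sales rose 12% over Q2">
```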
Automated tools aren’t effective at making subjective decisions, nor do they experience a website the way a human user does. Common issues that automated accessibility tests might miss include:

- Alt text that is present but inaccurate or unhelpful
- A focus or reading order that is technically valid but illogical for the page
- Vague link text (such as “click here”) that makes no sense out of context
- Keyboard traps and other interaction problems that only surface during real use
Of course, the opposite is also true: automated accessibility testing can find some issues that human testers might miss. For example, an automated tool can instantly find most color contrast issues across an entire page, giving designers fast, objective feedback.
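That's because contrast is a purely mechanical calculation: WCAG defines a contrast ratio based on the relative luminance of the foreground and background colors, so a tool can evaluate every color pair exhaustively. A minimal sketch of the formula (the sample colors are illustrative):

```typescript
// WCAG contrast ratio, computed from relative luminance.
// Each sRGB channel is linearized before the luminance weighting.
function linearize(channel: number): number {
  const s = channel / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// WCAG 2.x Level AA requires at least 4.5:1 for normal-size text.
// Grey #777777 on white comes out to roughly 4.48:1, just under the bar.
console.log(contrastRatio([119, 119, 119], [255, 255, 255]).toFixed(2));
```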
Ultimately, automated testing is useful but limited. In one study, government accessibility advocates in the United Kingdom intentionally created a webpage with 142 accessibility barriers, then analyzed the page with 13 automated accessibility tools. The best-performing tool identified only 40% of the barriers; the worst found just 13%.
Read: Is Automated Testing Enough for Accessibility Compliance?
There’s another great reason to use human testing when evaluating accessibility: Automated tools can identify some barriers, but they can’t explain how those barriers affect real users' experiences.
Qualified human testers have experience with assistive technologies, and they can provide actionable feedback that helps developers and designers create better content in the long term. An experienced tester can tell you exactly why a certain remediation tactic will work. That information can be invaluable: you'll spend less time fixing problems if you're able to build with accessibility best practices in mind.
Read: Why Is A Web Accessibility Audit Important?
At the Bureau of Internet Accessibility, we use a four-point hybrid testing methodology. First, our a11y® analysis platform runs hundreds of automated checks against the site based on WCAG Level A/AA success criteria, logging issues and remediation recommendations.
Second, a manual tester with a vision-related disability accesses the site using assistive technology. Our testers hold certifications in JAWS and NVDA, two of the most popular screen readers, and each has five or more years of experience in accessibility testing.
Third, a subject matter expert (SME) performs a second round of human testing, reviewing and validating each WCAG finding. Finally, a senior developer compiles a comprehensive report. Our senior developers have extensive experience in accessible web and mobile app design.
Related: A Look at Our Four-Point Hybrid Testing
We believe that by combining automated analysis with expert human oversight, we're able to provide our clients with practical guidance for meeting their accessibility goals. If you're pursuing an accessibility initiative, regular testing is essential for monitoring your progress and ensuring the best possible return on your investment.