Accessibility Testing: Beyond Compliance
Accessibility testing has traditionally been framed as a compliance exercise. Check the WCAG boxes, pass an audit, and move on. But for startups building products that serve a broad user base, that framing misses the bigger picture. Roughly 16 percent of the global population lives with some form of disability, according to the World Health Organization. That is not a niche. That is a market segment larger than the population of the European Union, and every one of those people is evaluating your product against their ability to use it.
Why accessibility testing matters beyond legal requirements
The compliance angle is real. The Americans with Disabilities Act, Section 508, and the European Accessibility Act all create legal obligations for digital products. Lawsuits against companies for inaccessible websites increased by over 300 percent between 2018 and 2024 in the United States alone. No startup wants to be on the wrong end of that trend.
But compliance is the floor, not the ceiling. The business case goes further. Microsoft's inclusive design research found that accessibility improvements benefit far more users than the target population. Captions help people in noisy environments. Keyboard navigation helps power users. High-contrast modes help people using devices in direct sunlight. When you improve accessibility, you improve usability for everyone.
There is also a retention argument. Users who encounter accessibility barriers do not file complaints. They leave. A WebAIM survey of screen reader users found that 71 percent of respondents simply abandoned a website when they encountered significant accessibility problems. Those are customers you will never hear from, because they gave up before reaching your support channel.
The four layers of accessibility testing
Effective accessibility testing is not a single activity. It operates across multiple layers, each catching different classes of issues. Teams that rely on only one layer will miss problems that the others would catch.
- Automated scanning: Tools like axe-core, Lighthouse, and WAVE can detect approximately 30 to 40 percent of WCAG issues automatically. These include missing alt text, insufficient color contrast, missing form labels, and incorrect heading hierarchy. Automated scans are fast, repeatable, and should run in CI on every build.
- Manual code review: A developer or accessibility specialist reviews the markup for semantic correctness. Is a button element used for clickable actions, or is a div with an onClick handler doing the job? Are ARIA attributes used correctly, or are they masking underlying structural problems? Automated tools cannot evaluate semantic intent, only syntactic presence.
- Assistive technology testing: Testing with actual screen readers (VoiceOver, NVDA, JAWS), keyboard-only navigation, and switch controls. This is where the user experience becomes real. A page might pass every automated check and still be unusable with a screen reader because the reading order is nonsensical or interactive elements lack meaningful announcements.
- User testing with people who have disabilities: The highest-fidelity testing involves real users navigating your product with their preferred assistive technology. This catches issues that even skilled QA testers miss, because those testers lack the daily experience with the tools, and the workarounds, that real users develop over time.
Most startups can implement the first two layers immediately. The third requires some training but is achievable with existing team members. The fourth is valuable for major releases and can be sourced through organizations that specialize in disability user research.
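To make the first layer concrete, here is a toy scanner in standard-library Python. It is a sketch, not a real tool: it flags just two of the checks that scanners like axe-core automate (images missing alt attributes and form inputs with no associated label), and all names and rules are illustrative.

```python
from html.parser import HTMLParser

class A11yScanner(HTMLParser):
    """Toy scanner for two common WCAG failures:
    images without alt text and inputs without labels."""

    def __init__(self):
        super().__init__()
        self.labelled_ids = set()   # ids referenced by <label for="...">
        self.inputs = []            # (id, has_aria_label) for each <input>
        self.violations = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            # Note: alt="" counts as present; empty alt is valid for
            # decorative images, so only a missing attribute is flagged.
            self.violations.append("img missing alt attribute")
        elif tag == "label" and "for" in a:
            self.labelled_ids.add(a["for"])
        elif tag == "input" and a.get("type") != "hidden":
            self.inputs.append((a.get("id"), "aria-label" in a))

    def report(self):
        # Resolved after the full parse, so labels that appear
        # after their input in the document still count.
        for input_id, has_aria in self.inputs:
            if not has_aria and input_id not in self.labelled_ids:
                self.violations.append("input missing label")
        return self.violations

def scan(html: str) -> list[str]:
    scanner = A11yScanner()
    scanner.feed(html)
    return scanner.report()

print(scan('<img src="logo.png"><input type="text">'))
# -> ['img missing alt attribute', 'input missing label']
```

Real scanners apply hundreds of rules like these, which is why running one in CI is so cheap relative to what it catches.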
Integrating accessibility into your development workflow
The most expensive way to handle accessibility is to bolt it on at the end. An accessibility audit on a finished product generates a list of findings that often require significant refactoring because the underlying component architecture was not designed with accessibility in mind. A button that needs to be replaced with a different element type might require changes to styles, event handlers, and test selectors across dozens of files.
The cheaper approach is to build accessibility into the development process from the start. This does not require heroic effort. It requires small, consistent practices applied throughout the development cycle.
During story creation, include accessibility acceptance criteria. For a new form, that might mean: "All form fields have visible labels. Error messages are associated with the correct field via aria-describedby. The form is completable using only a keyboard." These criteria give developers a clear target and give testers a clear verification plan.
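Those criteria translate directly into markup. A minimal sketch of a form field that satisfies all three (the field name and ids are invented for illustration):

```html
<label for="email">Email address</label>
<input id="email" type="email" aria-describedby="email-error" required>
<p id="email-error" role="alert">
  Enter a valid email address, like name@example.com.
</p>
```

The visible label is associated through the for/id pair, the error message through aria-describedby, and because these are native elements, keyboard operability comes for free.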
During development, run axe-core as a browser extension while building components. Most issues are trivial to fix when you catch them during development (add an alt attribute, fix a contrast ratio, use the correct semantic element) but expensive to fix after the component is used in 20 places. Integrating accessibility checks into your CI/CD pipeline catches regressions automatically and prevents accessibility debt from accumulating silently.
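As a sketch of what the CI side can look like, here is a hypothetical GitHub Actions step using @axe-core/cli. The serve command, URL, and port are placeholders for your own setup, and you should check the axe CLI documentation for the current flags:

```yaml
- name: Accessibility scan
  run: |
    npm start &                               # placeholder: serve the built app locally
    npx wait-on http://localhost:3000         # wait until the server responds
    npx @axe-core/cli http://localhost:3000 --exit   # fail the build on violations
```

The point is less the specific tool than the contract: the build goes red when an accessibility regression lands, the same way it does for a failing unit test.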
During testing, dedicate a portion of each exploratory session to keyboard-only navigation. Tab through every interactive element on the page. Can you reach all controls? Can you see where focus is? Can you operate dropdowns, modals, and menus without a mouse? This five-minute check catches a surprisingly high percentage of real-world accessibility barriers.
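Some keyboard blockers can even be caught statically before the exploratory session: a positive tabindex (which overrides the natural tab order) or a click-handling div that is not focusable at all. A toy check, again standard-library Python with illustrative rules and names:

```python
from html.parser import HTMLParser

class KeyboardCheck(HTMLParser):
    """Flags two static keyboard-navigation smells:
    positive tabindex values (they hijack natural tab order)
    and clickable divs/spans that are not focusable at all."""

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)   # HTMLParser lowercases attribute names
        tabindex = a.get("tabindex")
        if tabindex is not None and tabindex.lstrip("+").isdigit() and int(tabindex) > 0:
            self.warnings.append(f"<{tag}> has positive tabindex {tabindex}")
        if tag in ("div", "span") and "onclick" in a and tabindex is None:
            self.warnings.append(f"clickable <{tag}> is not keyboard-focusable")

def keyboard_smells(html: str) -> list[str]:
    checker = KeyboardCheck()
    checker.feed(html)
    return checker.warnings

print(keyboard_smells('<div onclick="save()">Save</div><input tabindex="3">'))
# -> ['clickable <div> is not keyboard-focusable', '<input> has positive tabindex 3']
```

A native button would pass both checks, which is the underlying lesson: semantic elements make most keyboard problems disappear.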
Common accessibility bugs and their impact
The WebAIM Million study, which analyzes the top one million websites annually, consistently finds the same categories of errors dominating the results. In 2025, the average website had 50.8 detectable accessibility errors per page. The most common issues are also some of the easiest to fix.
- Missing alternative text (58% of pages): Images without alt attributes are invisible to screen reader users. Adding descriptive alt text takes seconds per image.
- Low contrast text (83% of pages): Text that falls below the WCAG AA minimum contrast ratio of 4.5:1 against its background is difficult to read for users with low vision and for anyone using a device in bright conditions. Design systems should enforce contrast ratios at the token level.
- Missing form labels (46% of pages): Form inputs without associated labels leave screen reader users guessing what information to enter. Every input element needs a visible label that is programmatically associated with it.
- Empty links and buttons (44% of pages): Interactive elements with no text content announce nothing to assistive technology. Icon-only buttons need aria-label attributes that describe their function.
These four categories account for the majority of automated accessibility findings. Fixing them moves a product from unusable to functional for assistive technology users. The deeper issues around focus management, dynamic content announcements, and complex widget patterns require more expertise, but the basics are accessible to any development team willing to invest the effort.
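The 4.5:1 figure is not arbitrary: WCAG defines a relative luminance for each color and takes the ratio (lighter + 0.05) / (darker + 0.05). A small sketch of that math in Python:

```python
def channel(c: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # -> 21.0
print(round(contrast_ratio("#767676", "#ffffff"), 2))  # just clears the 4.5:1 AA minimum
```

Because the formula is deterministic, a design system can run exactly this check over its color tokens, so a low-contrast pairing fails review before it ever reaches a page.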
Measuring accessibility as a quality metric
What gets measured gets managed. Teams that treat accessibility as a tracked quality metric improve steadily. Teams that treat it as a periodic audit scramble to fix things once a year and regress in between.
Practical accessibility metrics for a startup include the number of automated accessibility violations per page (tracked over time in CI), the percentage of user stories with accessibility acceptance criteria, and the number of keyboard navigation blockers in the current release. These are simple to collect and provide a clear trend line.
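One lightweight way to turn the violation count into a trend line that only improves is a ratchet: the build fails if the count exceeds a recorded baseline, and the baseline drops whenever the team does better. A sketch of that policy (the function and its contract are invented for illustration):

```python
def ratchet(current: int, baseline: int) -> tuple[bool, int]:
    """Return (build_passes, new_baseline).

    The build fails if violations grew past the baseline; the
    baseline ratchets down whenever the count improves, so a
    later regression to the old, worse level is also caught."""
    if current > baseline:
        return False, baseline      # regression: fail the build
    return True, min(current, baseline)

print(ratchet(12, 15))  # improved: build passes, baseline drops to 12
print(ratchet(18, 15))  # regressed: build fails, baseline stays at 15
```

The same pattern works for any debt-style metric; its appeal here is that it never demands a big cleanup, only that things never get worse.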
For teams already tracking broader quality metrics, accessibility fits naturally into the existing framework. Escaped accessibility defects are a subset of the escaped defects that QA leaders should already be tracking. They represent bugs that a reasonable testing process should have caught, and tracking them separately highlights whether accessibility is getting adequate attention in the testing cycle.
Building accessibility into your testing practice
Accessibility testing does not require a dedicated accessibility team, especially at startups. It requires awareness, a few tools, and the discipline to include accessibility in definition-of-done criteria alongside functional correctness and performance.
Start with automated scanning in CI to catch the low-hanging fruit. Add keyboard navigation checks to your exploratory testing sessions. Educate developers on semantic HTML and ARIA usage through code reviews. These three practices, combined, catch the majority of accessibility issues without requiring specialized tooling or training.
For teams that want structured accessibility coverage without building internal expertise from scratch, a managed QA service can include accessibility testing as part of the regular testing cycle. This means every sprint gets accessibility verification alongside functional testing, rather than accessibility being an occasional afterthought.
The goal is not perfection on day one. The goal is steady improvement, tracked with metrics, built into the workflow, and treated as what it is: a core dimension of software quality that affects real users every day. If you are ready to make accessibility part of your regular testing practice, take a look at how Pinpoint includes accessibility in every testing cycle.
Ready to level up your QA?
Book a free 30-minute call and see how Pinpoint plugs into your pipeline with zero overhead.