Every accessibility program starts with an automated scanner. You paste a URL, a script traverses the DOM, and you get a list of violations ranked by WCAG level. The report is quick to produce and easy to triage. And depending on the tool you pick and the site you run it on, it catches somewhere between 30% and 57% of the real WCAG issues present on the page.
That's not the scanner vendors' dirty secret. It's documented. The open-source accessibility testing community has published on this for years — automated tooling is designed to flag the algorithmically decidable violations: missing alt text, insufficient color contrast ratios, unlabeled form inputs, heading order, landmark structure. The rules that a machine can evaluate with high confidence.
The other 43-70% — the manual-review share — is where accessibility programs actually live or die.
What automated scanners catch
The machine-decidable rules are valuable. They catch a lot of the highest-volume, low-severity issues in one pass:
- Images without alt attributes
- Form inputs missing labels (no <label for>, no aria-label, no aria-labelledby)
- Color contrast ratios below 4.5:1 for body text, 3:1 for large text
- Empty links, empty buttons, unlabeled icon controls
- Missing lang attribute, invalid HTML landmarks
- Heading-level skips (jumping from <h1> to <h3>)
- Duplicate id attributes
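None of this needs judgment, which is why it automates cleanly. A minimal sketch of this class of check, using plain DOM queries and no heuristics (quickScan is a hypothetical helper, not any particular scanner's API):

```ts
// A few of the machine-decidable checks from the list above, as plain
// DOM queries. Real scanners are more thorough, but not different in kind.
function quickScan(doc: Document): string[] {
  const findings: string[] = [];

  // Images without alt attributes
  doc.querySelectorAll("img:not([alt])").forEach((img) =>
    findings.push(`image missing alt: ${img.outerHTML.slice(0, 80)}`)
  );

  // Duplicate id attributes
  const seen = new Set<string>();
  doc.querySelectorAll("[id]").forEach((el) => {
    if (seen.has(el.id)) findings.push(`duplicate id: ${el.id}`);
    seen.add(el.id);
  });

  // Missing lang attribute on <html>
  if (!doc.documentElement.getAttribute("lang")) {
    findings.push("missing lang attribute on <html>");
  }

  return findings;
}
```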
If you've never scanned your site, run any reputable WCAG scanner this week. You'll get a real list of real fixes. And the fixes will be boring and mechanical, which is what you want — they're cheap to close.
What automated scanners can't catch
The gap is everything where compliance depends on meaning, not syntax.
Label clarity
A form field can have a perfectly valid label and still fail WCAG. <label>Date</label> is labeled — but which date? Check-in? Check-out? Date of birth? The scanner sees "label present, pass." A screen-reader user hears "date, edit" and has no idea what's being asked. WCAG 3.3.2 requires labels to identify purpose, not just exist.
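A hedged sketch of why: the check below is roughly what "label present" means to a machine (hasAccessibleLabel is a hypothetical, simplified helper), and it passes <label>Date</label> and <label>Check-in date</label> identically.

```ts
// Roughly the "label present" test a scanner runs. It is satisfiable by
// a label that tells the user nothing. Simplified: title attributes and
// full accessible-name computation are omitted.
function hasAccessibleLabel(input: HTMLInputElement): boolean {
  if (input.labels && input.labels.length > 0) return true; // <label for> or wrapping <label>
  if (input.getAttribute("aria-label")) return true;
  if (input.getAttribute("aria-labelledby")) return true;
  return false;
}
```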
Focus order + reading order
A well-marked-up page can tab through in an order that makes no sense to a keyboard user. CSS order, flex-direction: row-reverse, or position: absolute can visually rearrange content while leaving the source order — and therefore the tab order — intact. The scanner verifies focus indicators exist; it can't evaluate whether the flow is coherent. WCAG 2.4.3.
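A tool can at least surface the symptom. A sketch under assumptions (the focusable-element selector is simplified, and visualOrderMismatches is a hypothetical helper): compare DOM order, which drives tab order, against visual top-to-bottom order.

```ts
// Flag focusable elements whose DOM (tab) order disagrees with their
// visual top-to-bottom, left-to-right order. Producing this list is
// automatable; deciding whether a mismatch is actually incoherent for
// a keyboard user is the manual-review part.
function visualOrderMismatches(root: ParentNode): HTMLElement[] {
  const focusable = Array.from(
    root.querySelectorAll<HTMLElement>(
      'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])'
    )
  );
  const byPosition = [...focusable].sort((a, b) => {
    const ra = a.getBoundingClientRect();
    const rb = b.getBoundingClientRect();
    return ra.top - rb.top || ra.left - rb.left;
  });
  return focusable.filter((el, i) => el !== byPosition[i]);
}
```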
Error recovery
When a user submits a form with a mistake, can they actually figure out what went wrong and fix it? Scanners verify that error messages exist. They can't verify the messages are intelligible, visible in the right place, or associated with the right field. WCAG 3.3.3 requires suggestion of correction, which is a meaning-level check.
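The wiring half of this is machine-checkable; the wording half is not. A minimal sketch of the wiring, assuming each field has an id (attachError is a hypothetical helper):

```ts
// Associate an error message with its field so assistive technology
// announces it on focus. Scanners can confirm this association exists;
// whether the message text actually suggests a correction (WCAG 3.3.3)
// is a human judgment.
function attachError(input: HTMLInputElement, message: string): void {
  const id = `${input.id}-error`; // assumes the input has an id
  let note = document.getElementById(id);
  if (!note) {
    note = document.createElement("p");
    note.id = id;
    input.insertAdjacentElement("afterend", note);
  }
  note.textContent = message;
  input.setAttribute("aria-describedby", id);
  input.setAttribute("aria-invalid", "true");
}
```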
Heading structure that mirrors content
A scanner checks that headings don't skip levels. It can't check that the page's heading structure reflects its actual information architecture. Screen-reader users navigate by heading; a page where every section is <h2> regardless of hierarchy is worse than one that uses headings semantically, even though both pass the mechanical test.
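What automation can do here is make the human review cheap: extract the outline so a reviewer can compare it to the page's actual structure at a glance. A sketch (headingOutline is a hypothetical helper):

```ts
// Dump the heading outline, indented by level, for a reviewer. The
// machine can produce this view of the page; it cannot judge whether
// the view mirrors the information architecture.
function headingOutline(doc: Document): string[] {
  return Array.from(doc.querySelectorAll("h1, h2, h3, h4, h5, h6")).map((h) => {
    const level = Number(h.tagName[1]); // "H2" -> 2
    return `${"  ".repeat(level - 1)}${h.tagName.toLowerCase()}: ${h.textContent?.trim() ?? ""}`;
  });
}
```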
Interactive patterns
Custom dropdowns, tabs, modals, carousels — these are authored with ARIA roles and states. Scanners verify the attributes are present. They can't verify the attributes reflect the widget's actual behavior. A button with aria-expanded="true" that doesn't actually expand anything is a scanner pass and a screen-reader disaster.
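Catching that requires driving the widget, not reading its markup. A hypothetical micro-version of an interaction trace (expandsOnActivation is an assumed helper, not any real tool's API):

```ts
// Activate a disclosure control, then check whether aria-expanded
// actually changed. A static scan only sees that the attribute exists.
async function expandsOnActivation(button: HTMLElement): Promise<boolean> {
  const before = button.getAttribute("aria-expanded");
  button.click();
  // give the widget a frame to update its state
  await new Promise<void>((resolve) => requestAnimationFrame(() => resolve()));
  return button.getAttribute("aria-expanded") !== before;
}
```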
Reading order across languages and reading modes
WCAG 1.3.2 requires meaningful sequence. Multi-column layouts, RTL languages, and dynamic reflow at small viewport widths all introduce reading-order bugs that scanners don't model. This is the largest single category of missed issues on news and content sites.
Why the gap matters to your legal exposure
Demand letters citing the ADA and Section 508 have been accelerating for years. The plaintiff's bar uses the same scanners you do — they paste your URL, pull the machine-decidable violations, and build a complaint around the visible ones. But the settlement discussions invariably reveal the manual-review issues, because those are the ones real disabled users actually report to the plaintiff's attorneys. "My scanner says I'm compliant" is not a defense; "our compliance program included quarterly manual audits, interaction traces, and a remediation queue with a 30-day SLA" is.
That distinction — between scanned and reviewed — is the 43-70% gap, in legal terms.
How Parallax closes the gap
Parallax runs the machine-decidable layer first — contrast, labels, landmark structure — and then layers three things on top:
- Rendered-pixel contrast analysis. CSS-declared colors are one thing; actual rendered text on top of an image or gradient is another. Parallax renders the page and samples the pixels under the text. Real-world contrast, not computed-style contrast (the contrast math itself is sketched after this list).
- Interaction traces. Parallax drives the page like a keyboard user — tab through, open menus, fill forms. Focus order, error-recovery flow, and widget behavior are traced and flagged when they diverge from the semantic markup.
- A human-reviewed issue queue. Issues the automated layer can't decide (label clarity, heading structure vs. information architecture, reading-order coherence) are routed to a reviewer. Each review attaches a diff with a concrete fix, sized for a single pull request.
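The rendering and pixel sampling are the hard part; the math those samples feed is just the WCAG 2.x contrast formula. A minimal sketch over already-sampled foreground and background pixels (not Parallax's implementation):

```ts
// WCAG 2.x relative luminance and contrast ratio, applied to sampled
// RGB pixels. The sampling itself (rendering the page, locating text,
// picking background samples) is the hard part and is not shown here.
type RGB = [number, number, number];

function relativeLuminance([r, g, b]: RGB): number {
  const lin = (channel: number) => {
    const s = channel / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05); // 4.5:1 body text, 3:1 large text
}
```

As a sanity check, contrastRatio([118, 118, 118], [255, 255, 255]) comes out near 4.54, which is why #767676 on white is the canonical just-passing gray for body text.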
The output is a compliance program, not a scanner report. Machine-decidable issues fixed, manual-review issues queued and closed, audit trail produced for every page and release.
Status
For professional accessibility auditing services, contact [email protected].
Morton Digital is the commercial product arm of Morton Technology Consulting LLC. See the shop →