CodeCosts

AI Coding Tool News & Analysis

AI Coding Tools for Accessibility Engineers 2026: WCAG, ARIA, Screen Readers & Automated A11y Testing Guide

Your backlog has 340 WCAG violations from the latest axe-core scan, 12 of them critical. Legal just forwarded a demand letter citing ADA Title III. The design team shipped a new component library last sprint with zero accessibility documentation, and three of the seven new components trap keyboard focus. The marketing site has decorative images with alt text that says “image1.png” and form inputs with no associated labels. A screen reader user reported that the checkout flow is completely unusable because live regions are not announcing cart updates. This is a normal week for an accessibility engineer.

Most AI coding tool reviews test whether a tool can write a React component or generate a REST endpoint. That tells you nothing about whether it can generate correct ARIA attributes for a custom combobox, reason about focus management in a single-page application, produce color combinations that meet WCAG 2.2 AA contrast ratios, or write axe-core integration tests that catch regressions before they ship.

This guide evaluates every major AI coding tool through the lens of what accessibility engineers actually do. Not frontend engineering broadly (building UI), not QA (testing generally). Accessibility engineering: ensuring digital products are perceivable, operable, understandable, and robust for all users, including those who use assistive technologies.

TL;DR

  • Best free ($0): GitHub Copilot Free — generates semantic HTML patterns and basic ARIA attributes; 2,000 completions/mo covers focused a11y remediation work.
  • Best for remediation ($20/mo): Claude Code — strongest at reasoning through complex ARIA patterns, focus management logic, and WCAG success criteria interpretation; explains why something fails, not just what to fix.
  • Best for implementation ($20/mo): Cursor Pro — codebase-aware component refactoring, enforces a11y patterns via .cursorrules, multi-file fixes when updating shared components.
  • Best combo ($20/mo): Claude Code + Copilot Free — Claude Code for a11y architecture decisions and complex ARIA, Copilot for inline completions during remediation.

Why Accessibility Engineering Is Different

Accessibility engineers evaluate AI tools differently from most engineering roles. You are not just writing code that works visually. You are ensuring code works for users who cannot see the screen, who cannot use a mouse, who process information differently, or who use assistive technologies that interact with the DOM in ways most developers never consider. Here is what matters:

  • Semantic HTML is the foundation, not a nice-to-have: Most AI tools default to <div> and <span> for everything, then bolt on ARIA to compensate. This is fundamentally wrong. A <button> gives you keyboard interaction, focus management, and screen reader announcement for free. A <div role="button" tabindex="0"> requires you to manually implement Enter and Space key handlers, focus styles, and the correct ARIA states. You need AI tools that reach for native HTML elements first and only use ARIA when native semantics are genuinely insufficient.
  • ARIA is a contract, not a decoration: Adding role="tab" to an element is a promise to assistive technology that this element behaves like a tab. That means it must be inside a tablist, it must have aria-selected, arrow keys must move between tabs, and the associated tabpanel must have aria-labelledby pointing back. Most AI tools add ARIA roles without implementing the required keyboard interaction patterns or state management. Incomplete ARIA is worse than no ARIA — it tells screen readers to expect behavior that does not exist.
  • Focus management is invisible but critical: When a modal opens, focus must move into it. When it closes, focus must return to the trigger. When a list item is deleted, focus must move to a logical next element, not get lost on the <body>. When a single-page app navigates, focus must move to the new content or a skip link. AI tools that generate modals, dialogs, and dynamic content without focus management create experiences where keyboard and screen reader users get stranded.
  • Color is not information: “Click the red button” is meaningless to a colorblind user. Error states communicated only through color fail for the roughly 8% of men and 0.5% of women who have a color vision deficiency. Graphs that use color alone to distinguish data series are useless to millions. You need AI tools that understand WCAG 1.4.1 (Use of Color) and generate components that convey information through multiple channels: text, icons, patterns, and position, not just color.
  • Testing requires assistive technology knowledge: An axe-core scan catches about 30-40% of accessibility issues. The rest require manual testing with screen readers (VoiceOver, NVDA, JAWS), keyboard-only navigation, zoom to 200-400%, and Windows High Contrast Mode. You need AI tools that can generate test cases that go beyond automated scanning and describe manual testing procedures for the issues automation cannot catch.
  • Legal compliance is not optional: WCAG 2.1 AA is the de facto legal standard in most jurisdictions. The EU’s European Accessibility Act (EAA) took effect in June 2025. ADA Title III lawsuits hit 4,600+ in 2024. You need AI tools that understand the difference between WCAG A, AA, and AAA, know which success criteria apply to which content types, and can map violations to specific WCAG references — not just say “this needs better accessibility.”
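The semantic-HTML point above is easiest to see side by side. A minimal sketch of what the ARIA rebuild forces you to reimplement by hand (the save() handler is a hypothetical placeholder):

```html
<!-- Native: keyboard activation, focus, and screen reader
     announcement come for free -->
<button type="button" onclick="save()">Save</button>

<!-- ARIA rebuild: every native behavior must be reimplemented -->
<div role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') { event.preventDefault(); save(); }">
  Save
</div>
```

And this sketch still omits disabled-state handling and focus styling, both of which the native element also provides.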

Accessibility Engineer Task Support Matrix

Here is how each tool performs on the six core tasks that define an accessibility engineer’s work:

| Task | Copilot | Cursor | Windsurf | Claude Code | Amazon Q | Gemini CLI |
| --- | --- | --- | --- | --- | --- | --- |
| Semantic HTML & ARIA | Good | Good | Fair | Excellent | Fair | Good |
| Focus Management | Fair | Good | Fair | Good | Poor | Fair |
| Color & Contrast | Fair | Fair | Fair | Good | Poor | Good |
| Automated A11y Testing | Good | Good | Fair | Excellent | Fair | Fair |
| WCAG Compliance Mapping | Poor | Fair | Poor | Excellent | Poor | Good |
| Remediation Workflows | Good | Excellent | Fair | Excellent | Fair | Fair |

Semantic HTML & ARIA Pattern Generation

The first rule of ARIA is: do not use ARIA if you can use native HTML. The second rule is: if you must use ARIA, implement the full pattern. Most AI tools violate both.

What to test: Ask the tool to build a custom dropdown (combobox), a tab interface, an accordion, a dialog, and a data table with sorting. These five patterns cover the majority of ARIA complexity in real applications.

Claude Code is the strongest here. When asked to build a combobox, it reaches for <input> with role="combobox", aria-expanded, aria-activedescendant, aria-controls pointing to the listbox, and implements the full keyboard pattern: arrow keys to navigate options, Enter to select, Escape to close, and Home/End for first/last option. It generates the associated role="listbox" with role="option" children and manages aria-selected state. Critically, it explains why each attribute is needed, which helps you verify correctness.

Cursor Pro generates correct ARIA patterns when you have a .cursorrules file that specifies your a11y conventions. Without it, quality varies. With rules like “all interactive components must follow WAI-ARIA Authoring Practices 1.2” and “prefer native HTML over ARIA roles,” Cursor produces consistently good output and enforces patterns across your codebase.

Copilot generates decent semantic HTML and basic ARIA. It adds aria-label and role attributes correctly for simple patterns. For complex widgets like comboboxes and tree views, it frequently misses attributes (aria-activedescendant omitted, aria-expanded not toggled) or implements incomplete keyboard handlers.

Windsurf and Amazon Q tend to over-rely on <div> with ARIA roles rather than using native elements. You will frequently need to refactor <div role="button"> back to <button>. Amazon Q is better when working with AWS Amplify UI components, which have built-in accessibility, but for custom components it struggles.

Focus Management in Dynamic Applications

Focus management is where most AI tools fail hardest, because it requires understanding user intent across time, not just generating static markup.

Modal and Dialog Focus Trapping

A correctly implemented modal must: (1) move focus to the first focusable element (or the dialog itself) on open, (2) trap Tab and Shift+Tab within the modal boundaries, (3) close on Escape, (4) return focus to the trigger element on close, and (5) prevent interaction with content behind the modal via aria-hidden="true" on the rest of the page or inert attribute.

Claude Code generates modals that implement all five requirements. It uses the native <dialog> element when appropriate (which handles focus trapping and backdrop natively) and falls back to a custom implementation with a focus trap when <dialog> support is insufficient. It remembers to set inert on sibling containers and restores focus on close.

Cursor Pro handles focus trapping well when you are working in a React codebase with existing patterns to reference. It picks up focus trap utilities from your existing code and applies them consistently to new modals. Without existing patterns, it sometimes forgets focus restoration on close.

Copilot generates modals that move focus on open but frequently misses focus restoration on close. The keyboard user opens a modal, does something, closes it, and focus is now on <body> — they have to Tab through the entire page to get back to where they were. This is one of the most common a11y failures in production.
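The five modal requirements can be sketched with the native <dialog> element (a minimal, illustrative example; element ids are placeholders):

```html
<button id="open-settings">Open settings</button>

<dialog id="settings" aria-labelledby="settings-title">
  <h2 id="settings-title">Settings</h2>
  <button id="close-settings">Close</button>
</dialog>

<script>
  var trigger = document.getElementById('open-settings');
  var dialog = document.getElementById('settings');

  // showModal() moves focus into the dialog, traps Tab within it,
  // closes on Escape, and makes the rest of the page inert natively.
  trigger.addEventListener('click', function () { dialog.showModal(); });
  document.getElementById('close-settings')
    .addEventListener('click', function () { dialog.close(); });

  // Restore focus to the trigger explicitly on close, so keyboard
  // users land back where they started.
  dialog.addEventListener('close', function () { trigger.focus(); });
</script>
```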

Single-Page Application Route Changes

When a SPA navigates to a new “page,” the browser does not perform a full page load, so the screen reader does not announce the new content. You need to: (1) move focus to a heading or landmark at the top of the new content, (2) announce the page change via a live region or document title update, and (3) ensure the browser’s back button returns focus to the correct position.

Claude Code understands this problem and generates route change handlers that update document.title, move focus to an <h1> with tabindex="-1", and optionally use an aria-live="polite" region to announce navigation. It knows the difference between React Router, Next.js, and Vue Router focus management patterns.

Most other tools treat SPA navigation as a routing concern, not an accessibility concern, and generate no focus management code for route changes.
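A framework-agnostic sketch of the route-change pattern (handleRouteChange is a hypothetical function you would call from your router's navigation callback; React Router, Next.js, and Vue Router each expose an equivalent hook):

```html
<main>
  <h1 tabindex="-1">Checkout</h1>
</main>

<script>
  // Call on every client-side navigation, after the new content renders.
  function handleRouteChange(newTitle) {
    document.title = newTitle; // updated title is announced by most screen readers
    var heading = document.querySelector('main h1');
    if (heading) {
      heading.focus(); // tabindex="-1" makes the heading programmatically focusable
    }
  }
</script>
```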

Dynamic Content Updates

When content updates without a page reload — a chat message arrives, a form validates, a notification appears — screen readers need to be informed. This requires aria-live regions with the correct politeness level (polite for non-urgent updates, assertive for errors and time-sensitive information) and aria-atomic to control whether the entire region or just the change is announced.

Claude Code correctly distinguishes between aria-live="polite" and aria-live="assertive" and explains when to use each. It knows that live regions must exist in the DOM before content is injected (a common bug: creating a live region and immediately adding content means the first update is not announced). It generates the pattern of rendering an empty live region on mount and updating it when events occur.

Cursor and Copilot add aria-live attributes when prompted but often use assertive for everything (which is disruptive for non-critical updates) and sometimes create live regions dynamically, which means the first announcement is swallowed.
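The render-empty-then-update pattern looks like this (a framework-agnostic sketch; the id is illustrative):

```html
<!-- Rendered empty in the initial DOM so assistive technology
     registers the region before any update arrives -->
<div id="cart-status" aria-live="polite" aria-atomic="true"></div>

<script>
  function announceCartUpdate(message) {
    // Updating an already-registered region is announced; creating
    // the region and injecting content in the same tick usually is not.
    document.getElementById('cart-status').textContent = message;
  }
</script>
```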

Color Contrast & Visual Accessibility

WCAG 2.2 requires a minimum contrast ratio of 4.5:1 for normal text and 3:1 for large text (24px+ regular, or roughly 18.5px+ bold — 18pt/14pt bold in the spec’s terms) at AA level. Non-text elements (icons, borders, focus indicators) require 3:1 against adjacent colors.

Contrast Ratio Calculation

Claude Code can calculate contrast ratios when given two hex, RGB, or HSL values and tell you whether they pass AA, AAA, or fail. It understands relative luminance calculations and can suggest the nearest color that passes a given threshold. When generating CSS, it flags combinations that are likely to fail — light gray text on white backgrounds, for example.
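The underlying math is simple enough that you can verify any tool’s answer yourself. A sketch of the WCAG 2.x relative luminance and contrast ratio formulas in plain JavaScript (assumes "#rrggbb" hex input):

```javascript
// WCAG 2.x relative luminance: linearize each sRGB channel,
// then weight by the eye's sensitivity to red, green, and blue.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(hexA, hexB) {
  const [hi, lo] = [luminance(hexA), luminance(hexB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

As a sanity check, black on white is 21:1, and #767676 on white is the classic “just passes AA” gray at roughly 4.54:1.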

Gemini CLI handles contrast calculations well due to its large context window, which lets you paste an entire CSS file and ask it to audit all color combinations. It identifies pairs that fail WCAG AA and suggests alternatives.

Most other tools do not proactively flag contrast issues. They generate whatever colors you ask for or that exist in your codebase, regardless of whether the combinations meet WCAG requirements.

Dark Mode Accessibility

Dark mode is not just inverting colors. Common failures include: focus indicators that are visible on light backgrounds but invisible on dark, disabled state colors that lose sufficient contrast, link colors that pass on white but fail on dark gray, and shadows that provide visual separation in light mode but disappear in dark mode.

Claude Code generates dark mode implementations that use CSS custom properties with separate light and dark tokens, ensuring each token pair meets contrast requirements independently. It knows to test focus indicators (:focus-visible outlines) against both light and dark backgrounds.

Cursor Pro is effective here if your codebase already has a design token system. It extends existing token patterns consistently. Without tokens, it tends to use filter: invert(1) or simple color swaps that break contrast.
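The token-pair approach looks roughly like this (a minimal sketch; the token names and hex values are illustrative, with each foreground/background pair checked against 4.5:1 independently per theme):

```css
:root {
  --color-text: #1a1a1a;   /* dark gray on white: well above 4.5:1 */
  --color-bg: #ffffff;
  --color-link: #0b57d0;   /* checked against the light background */
}

@media (prefers-color-scheme: dark) {
  :root {
    --color-text: #e8e8e8; /* re-checked against the dark background */
    --color-bg: #121212;
    --color-link: #8ab4f8; /* lighter link color for dark surfaces */
  }
}

/* Focus indicator must stay visible against BOTH themes */
:focus-visible {
  outline: 2px solid var(--color-link);
  outline-offset: 2px;
}
```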

High Contrast Mode and Forced Colors

Windows High Contrast Mode (now “Forced Colors” in CSS) overrides your colors with system colors. Custom focus indicators, borders, and icons that rely on specific colors disappear. You need to use @media (forced-colors: active) to provide fallbacks.

Claude Code is the only tool that consistently generates forced-colors media queries when building custom components. It knows that transparent borders become visible in forced-colors mode (useful for custom checkbox/radio indicators) and that SVG icons need currentColor fill to adapt.

All other tools ignore forced-colors mode entirely unless explicitly prompted.
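A minimal sketch of the forced-colors fallbacks described above (class names are illustrative):

```css
/* Custom checkbox indicator: the background color is stripped in
   forced-colors mode, but a transparent border is repainted in a
   system color, keeping the control's bounds visible. */
.checkbox-indicator {
  background: #0b57d0;
  border: 2px solid transparent;
}

@media (forced-colors: active) {
  /* System colors like Highlight adapt to the user's palette */
  .focus-ring:focus-visible {
    outline: 2px solid Highlight;
  }
}

/* SVG icons: currentColor follows the forced text color */
.icon {
  fill: currentColor;
}
```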

Keyboard Navigation & Interaction Patterns

Every interactive element must be operable with a keyboard alone. This means: focusable, has visible focus indicators, responds to expected keys (Enter/Space for buttons, arrow keys for menus/tabs/listboxes), and follows the WAI-ARIA Authoring Practices Guide (APG) keyboard patterns.

Tab Order and Skip Navigation

Tab order should follow visual reading order. Positive tabindex values (1, 2, 3...) are almost always wrong — they create a parallel tab order that confuses users. Skip navigation links let keyboard users bypass repetitive content (navigation bars) to reach main content.

Claude Code never generates positive tabindex values and explains why they are harmful. When building page layouts, it includes skip navigation links and landmark regions (<main>, <nav>, <aside>) that screen reader users can navigate directly.

Copilot occasionally generates tabindex="1" or tabindex="2" when trying to make elements focusable. This is a bug, not a feature. Always replace with tabindex="0" (natural tab order) or tabindex="-1" (programmatically focusable only).
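A sketch of the skip link and landmark structure, using only tabindex="0" (natural order) and tabindex="-1" (programmatically focusable):

```html
<body>
  <a class="skip-link" href="#main-content">Skip to main content</a>

  <nav aria-label="Primary">
    <a href="/">Home</a>
    <a href="/products">Products</a>
  </nav>

  <!-- tabindex="-1" lets the skip link move focus here without
       adding the region itself to the Tab order -->
  <main id="main-content" tabindex="-1">
    <h1>Products</h1>
  </main>
</body>
```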

Roving Tabindex vs. aria-activedescendant

For composite widgets (tab lists, menus, toolbars), there are two keyboard navigation patterns: roving tabindex (move tabindex="0" between children, others get tabindex="-1") and aria-activedescendant (container keeps focus, points to the visually active child). Each has trade-offs.

Claude Code knows both patterns, explains the trade-offs (roving tabindex works better with screen readers that have browse mode; aria-activedescendant is simpler to implement for complex widgets), and lets you choose. It implements whichever pattern you select correctly.

Most other tools default to roving tabindex for everything, which is usually fine but can be problematic for widgets where the container needs to handle other keyboard events (like a rich text editor toolbar).
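The index math behind a roving tabindex is a pure function, which makes it easy to unit test in isolation (a sketch; a full implementation would also move tabindex="0" to the element at the returned index and call focus() on it):

```javascript
// Compute the next active index for a roving-tabindex widget
// (tabs, menus, toolbars). Arrow keys wrap around; Home/End jump
// to the first/last item; unhandled keys leave focus in place.
function nextRovingIndex(current, key, count) {
  switch (key) {
    case 'ArrowRight':
    case 'ArrowDown':
      return (current + 1) % count;
    case 'ArrowLeft':
    case 'ArrowUp':
      return (current - 1 + count) % count;
    case 'Home':
      return 0;
    case 'End':
      return count - 1;
    default:
      return current;
  }
}
```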

Custom Keyboard Shortcuts

WCAG 2.1 SC 2.1.4 (Character Key Shortcuts) requires that single-character keyboard shortcuts can be turned off, remapped, or are only active when a component has focus. AI tools frequently generate single-key shortcuts (press “d” to delete, “e” to edit) without any mechanism to disable them, which interferes with screen reader commands and speech input.

Claude Code flags this issue when generating keyboard shortcut handlers and suggests modifier keys (Ctrl+D instead of D alone) or focus-scoped shortcuts. Other tools generate bare key handlers without considering this WCAG requirement.
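The SC 2.1.4 guard reduces to a small predicate you can drop in front of any single-character handler (a sketch; event is any object exposing the standard modifier-key flags):

```javascript
// Decide whether a single-character shortcut may fire under
// WCAG 2.1.4: allow it only when a modifier key is held, or
// when the shortcut's own component has keyboard focus.
function shortcutMayFire(event, componentHasFocus) {
  const hasModifier = event.ctrlKey || event.metaKey || event.altKey;
  return hasModifier || componentHasFocus;
}
```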

Automated Accessibility Testing

Automated testing catches 30-40% of WCAG violations. The rest require manual testing. But that 30-40% includes the highest-volume issues: missing alt text, missing form labels, color contrast failures, missing landmark regions, and duplicate IDs. Catching these automatically prevents regression and frees you to focus on the harder manual testing.

axe-core Integration

Claude Code generates comprehensive axe-core test setups for any framework. For React Testing Library, it integrates jest-axe with custom configuration that excludes rules you have intentionally suppressed (with documentation of why). For Cypress, it sets up cypress-axe with cy.checkA11y() calls and custom impact level thresholds. For Playwright, it integrates @axe-core/playwright with per-page and per-component scanning.

Example Claude Code output for React Testing Library:

import { render } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { axe, toHaveNoViolations } from 'jest-axe';
// Component under test
import { SearchCombobox } from './SearchCombobox';

expect.extend(toHaveNoViolations);

const mockOptions = ['React', 'Vue', 'Svelte'];

describe('SearchCombobox accessibility', () => {
  it('has no axe violations in default state', async () => {
    const { container } = render(<SearchCombobox />);
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });

  it('has no axe violations when expanded', async () => {
    const { container, getByRole } = render(
      <SearchCombobox options={mockOptions} />
    );
    await userEvent.click(getByRole('combobox'));
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });

  it('has no axe violations with selection', async () => {
    const { container, getByRole } = render(
      <SearchCombobox options={mockOptions} />
    );
    await userEvent.click(getByRole('combobox'));
    await userEvent.click(getByRole('option', { name: 'React' }));
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });
});

Cursor Pro generates similar test code but adapts to whatever testing framework exists in your project. If you already use Playwright, it adds axe scans to existing test files. If you use Cypress, it extends your existing commands. This codebase-awareness makes it faster for adding a11y tests to established projects.

Copilot generates basic axe-core tests but often misses testing different component states. A combobox needs to be tested closed, open, with a selection, and in error state — Copilot usually only tests the default render.

CI Pipeline Integration

Accessibility tests must run in CI to prevent regression. This means: axe-core scans on every PR, Lighthouse accessibility audits on key pages, and optionally pa11y-ci for crawling multi-page sites.

Claude Code generates complete CI configurations that include axe-core in the test suite, Lighthouse CI with accessibility score thresholds (fail the build if the score drops below 90), and reporting that surfaces specific WCAG violations in PR comments.

Cursor integrates well with existing CI configurations, adding a11y test steps to whatever pipeline you already have. It is less opinionated about the specific tools but adapts to your existing setup effectively.
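A minimal lighthouserc.json sketch for the accessibility-score gate described above (the URLs are illustrative; Lighthouse CI scores range 0–1, so a 90 threshold is written as 0.9):

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/", "http://localhost:3000/checkout"]
    },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.9 }]
      }
    }
  }
}
```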

Manual Testing Checklists

For the 60-70% of issues that automated tools miss, you need manual testing procedures. Claude Code generates structured manual testing checklists organized by WCAG principle (Perceivable, Operable, Understandable, Robust) that include specific steps:

  • “Navigate the entire checkout flow using only Tab, Shift+Tab, Enter, Space, and Escape. Verify focus is visible at every step.”
  • “Enable VoiceOver (Cmd+F5 on Mac). Navigate to the search combobox using VO+Right Arrow. Verify the role, name, and state are announced correctly.”
  • “Zoom the browser to 400%. Verify no content is clipped, overlapping, or requires horizontal scrolling.”
  • “Enable Windows High Contrast Mode. Verify all interactive elements remain visible and focus indicators are distinguishable.”

No other tool generates this level of manual testing guidance. Most produce only automated test code.

WCAG Compliance Mapping & Audit Support

When remediating a11y issues or responding to an audit, you need to map each violation to a specific WCAG success criterion, explain the impact, and prioritize fixes.

Violation-to-WCAG Mapping

Claude Code excels here. Give it an axe-core violation report or describe an issue, and it maps it to the exact WCAG 2.2 success criterion, level (A, AA, AAA), and principle. For example:

  • “Missing alt text on product image” → WCAG 1.1.1 Non-text Content (Level A) — “All non-text content that is presented to the user has a text alternative that serves the equivalent purpose. For decorative images, use alt="" and role="presentation". For informative images, describe the content and function.”
  • “Form submits with no error feedback” → WCAG 3.3.1 Error Identification (Level A) + 3.3.3 Error Suggestion (Level AA) — “Errors must be identified and described in text. If the error can be corrected, suggestions must be provided.”

Gemini CLI also performs well on WCAG mapping due to its large context — you can paste entire audit reports and get structured mappings back. However, it is less precise on the nuances of specific success criteria.

Other tools can identify that something is “an accessibility issue” but rarely map it to the specific WCAG criterion, which is what auditors and legal teams require.

VPAT and ACR Generation Support

Voluntary Product Accessibility Templates (VPATs) and Accessibility Conformance Reports (ACRs) document how a product meets WCAG, Section 508, and EN 301 549. Claude Code can help structure VPAT responses by generating per-criterion assessments based on your codebase analysis, though you should always have an accessibility specialist review the final document.

Remediation Workflows

Most accessibility work is not building new accessible components — it is fixing existing inaccessible ones. This is where codebase-aware tools have a significant advantage.

Bulk Remediation

Cursor Pro is strongest for bulk remediation. Give it an axe-core report with 50 “images without alt text” violations across 30 files, and it can fix them in a single multi-file operation. Its codebase awareness means it understands your component patterns — if your <ProductCard> component renders images, it fixes the component once rather than fixing each instance.

Claude Code is equally effective but works differently. It analyzes the pattern of violations, identifies the root cause (usually a shared component), and suggests fixing the source rather than the symptoms. For 50 missing alt text violations, it finds the three components that render images and adds required alt props with TypeScript enforcement.

Copilot handles file-by-file remediation well but cannot coordinate multi-file fixes. You fix each file individually, which is slow for large-scale remediation.

Component Library Accessibility Retrofit

When a design team ships a component library without accessibility, you need to retrofit ARIA, keyboard handling, and focus management into every component. This is one of the most time-consuming accessibility tasks.

Claude Code analyzes a component and generates a complete accessibility retrofit: adds ARIA attributes, implements keyboard handlers, adds focus management, and writes tests. For a custom select component, it adds role="listbox", aria-expanded, aria-activedescendant, keyboard navigation (arrow keys, Home, End, type-ahead), and focus trapping — essentially rebuilding the interactive behavior that a native <select> provides for free.

Cursor Pro with a comprehensive .cursorrules file enforces accessibility requirements as components are being built, preventing the retrofit problem in the first place. This is the more cost-effective approach: require accessibility from the start.

Head-to-Head: 12 Accessibility Engineering Tasks

| Task | Best Tool | Why |
| --- | --- | --- |
| Build accessible combobox from scratch | Claude Code | Full WAI-ARIA APG pattern with keyboard handling and state management |
| Fix 50+ axe-core violations across codebase | Cursor Pro | Multi-file refactoring, fixes root component rather than each instance |
| Add focus management to SPA navigation | Claude Code | Understands framework-specific routing and focus patterns |
| Write axe-core + Playwright test suite | Claude Code | Tests all component states, not just default render |
| Audit color contrast across entire CSS | Gemini CLI | Large context fits full stylesheets; identifies all failing pairs |
| Retrofit keyboard navigation into component library | Claude Code | Generates complete keyboard patterns per WAI-ARIA APG |
| Generate WCAG compliance mapping for audit | Claude Code | Maps violations to specific success criteria with levels |
| Implement dark mode with a11y | Cursor Pro | Extends existing token systems, enforces contrast per theme |
| Add screen reader announcements to chat UI | Claude Code | Correct aria-live region implementation with proper timing |
| Fix focus trap in modal component | Claude Code | Handles edge cases: focus restoration, inert attribute, nested modals |
| Add a11y tests to existing CI pipeline | Cursor Pro | Adapts to existing CI config, adds a11y steps without restructuring |
| Implement forced-colors mode support | Claude Code | Only tool that generates forced-colors media queries proactively |

Cost Analysis: What Accessibility Engineers Actually Pay

| Tier | Stack | Monthly | Best For |
| --- | --- | --- | --- |
| Free | Copilot Free | $0 | Developers adding basic a11y to their own code |
| Free | Gemini CLI Free | $0 | CSS contrast audits, large file analysis |
| Starter | Copilot Pro | $10 | More completions for sustained remediation work |
| Pro A11y | Claude Code | $20 | Full-time accessibility engineers — ARIA, WCAG mapping, testing |
| Pro A11y | Cursor Pro | $20 | Bulk remediation, codebase-wide a11y enforcement |
| Combo | Claude Code + Copilot Free | $20 | Best overall: design + implementation + remediation |
| Max | Claude Code + Cursor Pro | $40 | Large-scale remediation + complex ARIA + WCAG audit support |

Workflow Patterns for Accessibility Engineers

Pattern 1: The Remediation Sprint

You have an axe-core audit with 200+ violations and two weeks to fix them.

  1. Triage with Claude Code: Feed the violation report. Claude Code groups violations by root cause (e.g., 40 missing alt texts trace to 3 shared image components), maps each to WCAG criteria, and prioritizes by impact (Level A before AA, critical before moderate).
  2. Bulk fix with Cursor Pro: Fix the root-cause components. Cursor’s multi-file editing handles propagating changes across the codebase.
  3. Verify with Claude Code: Generate axe-core tests for each fixed component in all its states. Add to CI.
  4. Manual testing list from Claude Code: For issues automation cannot catch, generate a manual testing checklist organized by page and interaction type.

Pattern 2: The Proactive A11y Reviewer

You review every PR for accessibility before it merges.

  1. Set up .cursorrules or CLAUDE.md with your a11y requirements: “All interactive elements must be keyboard operable,” “All images must have alt text,” “ARIA roles must follow WAI-ARIA APG patterns.”
  2. PR diff review with Claude Code: Paste the diff. Claude Code identifies a11y issues in new code: missing labels, incorrect ARIA, broken keyboard patterns, color contrast concerns.
  3. Auto-fix with Cursor: Apply fixes directly to the PR branch using Cursor’s codebase-aware editing.

Pattern 3: The Design System A11y Lead

You own accessibility for a shared component library used across multiple teams.

  1. Component specification with Claude Code: For each new component, generate the complete a11y specification: ARIA roles, states, properties, keyboard interaction pattern, focus management behavior, and screen reader announcement expectations.
  2. Implementation with Cursor Pro: Build components with the spec as context in .cursorrules. Cursor enforces the patterns across all component variants.
  3. Testing with Claude Code: Generate comprehensive test suites covering all states, keyboard interactions, and screen reader announcements. Include both axe-core automated tests and manual testing checklists.

Pattern 4: The Compliance Engineer

You maintain WCAG conformance documentation and respond to legal/audit inquiries.

  1. Audit with Claude Code: Analyze the codebase against WCAG 2.2 AA success criteria. Generate a structured report mapping each criterion to its conformance status.
  2. VPAT drafting with Claude Code: Generate per-criterion assessments for the VPAT/ACR. Claude Code produces specific, evidence-based statements rather than generic “supports”/“does not support.”
  3. Remediation tracking: For each non-conformant criterion, generate a remediation plan with effort estimates and code-level fix descriptions.

Rules Files: Enforcing Accessibility Standards

Example .cursorrules for Accessibility Engineering

# Accessibility Engineering Rules

## HTML & Semantics
- ALWAYS use native HTML elements before ARIA (button, not div[role=button])
- All images MUST have alt text. Decorative images: alt="" role="presentation"
- All form inputs MUST have associated labels (for/id or aria-labelledby)
- Tables MUST have caption or aria-label and th with scope attributes
- Do NOT use positive tabindex values (1, 2, 3) — only 0 or -1

## ARIA
- Follow WAI-ARIA Authoring Practices Guide 1.2 for all widget patterns
- ARIA roles MUST have all required states and properties
- aria-live regions must exist in DOM before content is injected
- Use aria-live="polite" for non-urgent updates, "assertive" only for errors
- aria-expanded, aria-selected, aria-checked must toggle correctly

## Keyboard
- All interactive elements must be keyboard operable
- Modals: trap focus, close on Escape, restore focus to trigger on close
- Tab lists: arrow keys between tabs, Tab moves to panel
- Menus: arrow keys navigate, Enter/Space select, Escape closes
- Single-character shortcuts must have modifier keys or be focus-scoped

## Visual
- Color contrast: 4.5:1 for normal text, 3:1 for large text (WCAG AA)
- Non-text contrast: 3:1 for UI components and graphics
- Never use color alone to convey information
- Focus indicators: visible, 3:1 contrast against adjacent colors
- Support @media (forced-colors: active) for custom components

## Testing
- Every component must have axe-core tests for all interactive states
- Test keyboard navigation in every component test
- Include aria-role assertions in component tests

Example CLAUDE.md for Accessibility Engineering

# Accessibility Project Context

## Stack
- Framework: React 18 with TypeScript
- Component Library: Custom design system (ds-components)
- Testing: Playwright + @axe-core/playwright, Jest + jest-axe
- CI: GitHub Actions with Lighthouse CI (a11y threshold: 95)
- Standards: WCAG 2.2 AA conformance required

## Accessibility Rules
- All components must follow WAI-ARIA APG 1.2 patterns
- Native HTML elements first, ARIA only when native is insufficient
- Every interactive component must have keyboard tests
- axe-core tests required for all states (default, open, error, disabled)
- Focus management: document in component JSDoc where focus moves
- Color tokens: every foreground/background pair must meet 4.5:1

## When I Ask to Fix an A11y Issue
1. Identify the WCAG success criterion being violated
2. Explain why it fails (not just what to change)
3. Fix the root cause (shared component), not individual instances
4. Add axe-core test covering the fixed issue
5. Add manual testing step if automated coverage is insufficient

## When I Ask for a New Component
1. List all ARIA roles, states, and properties needed
2. Define the complete keyboard interaction pattern
3. Implement with native HTML where possible
4. Add comprehensive tests: axe-core + keyboard + screen reader assertions
5. Include forced-colors media query if component has custom visuals

Common Pitfalls: AI Tools and Accessibility

  1. ARIA soup. AI tools add ARIA attributes liberally without understanding the implications. A <div> with role="button", aria-label, aria-pressed, aria-expanded, and aria-haspopup is not accessible — it is confusing. Each ARIA attribute is a promise to assistive technology. Only add what you need, and implement the behavior each attribute promises.
  2. Alt text that describes the file, not the content. AI tools frequently generate alt text like “hero-banner.jpg” or “product image” instead of describing what the image actually shows. Alt text should convey the same information a sighted user gets. “Woman using laptop in coffee shop” is useful. “Image” is not.
  3. aria-label overriding visible text. If a button has visible text “Submit” and aria-label="Submit form data to server", screen reader users hear the aria-label, which diverges from what sighted users see. Worse, speech input users who say “click Submit” can fail entirely when the accessible name does not contain the visible text (WCAG 2.5.3, Label in Name). Use aria-label only when there is no visible text, and use aria-describedby for supplemental information.
  4. Focus traps with no escape. AI-generated modals sometimes trap focus but forget the Escape key handler. The keyboard user is literally trapped. Always verify that Escape closes the component and returns focus.
  5. Live regions created dynamically. If you create an aria-live region and immediately inject content, the first announcement is lost because the screen reader has not yet registered the region. Always render the live region in the initial DOM and update its content later. AI tools consistently get this wrong.
  6. Testing only the happy path. AI-generated a11y tests usually render the component once and run axe-core. But many violations only appear in specific states: expanded dropdowns, error states, loading states, disabled states. Test every state, not just the default render.
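Pitfall 3 is mechanically checkable. Here is a hypothetical lint-style helper (the name `labelInNameOk` and its signature are our own) implementing the WCAG 2.5.3 rule that when an aria-label overrides visible text, the accessible name must contain the visible label so speech-input commands still match:

```typescript
// Check WCAG 2.5.3 "Label in Name": if an aria-label overrides visible
// text, the accessible name must contain that visible text, or speech
// input commands like "click Submit" stop working.
function labelInNameOk(visibleText: string, ariaLabel?: string): boolean {
  // No aria-label: the accessible name is the visible text itself. Fine.
  if (!ariaLabel) return true;
  const norm = (s: string) => s.trim().toLowerCase();
  return norm(ariaLabel).includes(norm(visibleText));
}

console.log(labelInNameOk("Submit", "Submit form data to server")); // true
console.log(labelInNameOk("Submit", "Send data"));                  // false
```

Note the first case passes 2.5.3 but is still poor practice — the screen reader announcement diverges from what sighted users see. Best practice is for the accessible name to start with the visible label, or better, to drop the aria-label entirely.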

Recommendations by Role

| Role | Recommended Stack | Monthly Cost |
| --- | --- | --- |
| Developer Adding A11y to Own Code | Copilot Free — basic ARIA and semantic HTML guidance | $0 |
| Frontend Dev Assigned A11y Tasks | Cursor Pro — codebase-aware fixes, .cursorrules for a11y conventions | $20 |
| Dedicated Accessibility Engineer | Claude Code + Copilot Free — WCAG reasoning + inline completions | $20 |
| A11y Team Lead / Design System | Claude Code + Cursor Pro — spec generation + codebase enforcement | $40 |
| Compliance / VPAT Author | Claude Code — WCAG criterion mapping, audit documentation support | $20 |
| Agency / Freelance A11y Consultant | Claude Code + Gemini CLI Free — audits + large-file contrast analysis | $20 |

The Bottom Line

Accessibility engineering AI tooling in 2026 splits along the knowledge-execution axis:

  • WCAG knowledge is your bottleneck? Claude Code ($20/mo). It is the only tool that reasons through WCAG success criteria, explains why something fails, maps violations to specific criteria and levels, and generates complete WAI-ARIA APG patterns. It is like having a senior accessibility consultant available for every code decision.
  • Remediation speed is your bottleneck? Cursor Pro ($20/mo) with a comprehensive .cursorrules file. Multi-file refactoring for bulk fixes, codebase-aware pattern enforcement, and the ability to fix the root component rather than each instance.
  • Doing both? Claude Code + Copilot Free ($20/mo) for most accessibility engineers. Claude Code + Cursor Pro ($40/mo) for teams doing large-scale remediation or maintaining design system accessibility.
  • Budget-constrained? Copilot Free ($0) covers basic semantic HTML and ARIA. Gemini CLI Free adds large-context CSS auditing. Together, $0.

The biggest gap in AI tooling for accessibility engineers is screen reader behavior prediction. No AI tool today can reliably tell you exactly how VoiceOver, NVDA, and JAWS will announce a given piece of markup, because screen reader behavior is not standardized and varies between versions. Until AI tools can simulate assistive technology output, manual testing with real screen readers remains essential. The tools that bridge this gap — perhaps by integrating with screen reader testing APIs — will transform accessibility engineering.

Compare all tools and pricing on our main comparison table, read the hidden costs guide before committing to a paid plan, or check the enterprise guide if you need compliance and procurement details.

Related on CodeCosts