# @weaaare/mcp-a11y-readability

> v0.1.1 — MCP server for text readability analysis: Spanish and English readability formulas, WCAG 3.1.5 compliance, and plain language assessment.
MCP (Model Context Protocol) server for text readability analysis. Gives AI coding agents the ability to evaluate reading difficulty in Spanish and English using scientifically validated formulas, aligned with WCAG accessibility standards.
Can help cover WCAG 2.2 SC 3.1.5 — Reading Level (AAA) — see the criterion, sufficient techniques, and how AI agents use this server below.
## Tools
| Tool | Description |
| --- | --- |
| analyze-readability | Run all formulas for a language, get a consensus difficulty level |
| analyze-readability-formula | Run a single named formula on a text |
| get-text-stats | Get word, sentence, syllable, and character statistics |
| list-formulas | List all available formulas (optionally filtered by language) |
| suggest-readability-improvements | Get actionable suggestions to simplify text |
| compare-texts | Compare readability between two texts (e.g. before/after) |
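Over the wire, an MCP tool invocation is a JSON-RPC `tools/call` request. A sketch of calling `analyze-readability` (the `text` and `language` argument names are assumptions; discover the tool's real input schema via `tools/list`):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze-readability",
    "arguments": { "text": "El gato duerme en el sofá.", "language": "es" }
  }
}
```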
## Supported formulas

### Spanish (7)
| Formula | Author(s) | Year | Output |
| --- | --- | --- | --- |
| Fernández Huerta | Fernández Huerta, J. | 1959 | Ease score (0–100) |
| Szigriszt-Pazos (Perspicuidad) | Szigriszt Pazos, F. | 1993 | Ease score (0–100) |
| INFLESZ | Barrio-Cantalejo, I.M. et al. | 2008 | Ease score (healthcare scale) |
| Gutiérrez de Polini | Gutiérrez de Polini, L.E. | 1972 | Ease score (0–100) |
| Crawford | Crawford, A.N. | 1989 | Grade level (primary school) |
| Legibilidad µ (mu) | Muñoz Baquedano, M. & Muñoz Urra, J. | 2006 | Ease score (0–∞) |
| García López | García López, J.A. & Arcos, A. | 1999 | Minimum age |
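Two of the Spanish ease-score formulas can be sketched from raw counts. The coefficients below are the widely circulated forms documented on legible.es; treat this as an illustration, not the package's exact implementation (tokenization and syllable counting details may differ):

```typescript
// Fernández Huerta (1959), corrected coefficients:
//   L = 206.84 - 60 * (syllables/word) - 1.02 * (words/sentence)
function fernandezHuerta(words: number, sentences: number, syllables: number): number {
  return 206.84 - 60 * (syllables / words) - 1.02 * (words / sentences);
}

// Szigriszt-Pazos perspicuity (1993), interpreted with the INFLESZ scale:
//   P = 206.835 - 62.3 * (syllables/word) - (words/sentence)
function szigrisztPazos(words: number, sentences: number, syllables: number): number {
  return 206.835 - 62.3 * (syllables / words) - (words / sentences);
}

// Example: 100 words, 5 sentences, 190 syllables
console.log(fernandezHuerta(100, 5, 190)); // ≈ 72.44
console.log(szigrisztPazos(100, 5, 190));  // ≈ 68.47
```

Higher scores mean easier text on both scales, mirroring the Flesch convention they descend from.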
### English (6)
| Formula | Author(s) | Year | Output |
| --- | --- | --- | --- |
| Flesch Reading Ease | Flesch, R. | 1948 | Ease score (0–100) |
| Flesch-Kincaid Grade Level | Kincaid, J.P. et al. | 1975 | U.S. grade level |
| Gunning Fog Index | Gunning, R. | 1952 | Years of education |
| SMOG Index | McLaughlin, G.H. | 1969 | U.S. grade level |
| Coleman-Liau Index | Coleman, M. & Liau, T. | 1975 | U.S. grade level |
| Automated Readability Index | Smith, E.A. & Senter, R.J. | 1967 | U.S. grade level |
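The two Flesch formulas illustrate the difference between an ease score and a grade level; both use the same three counts. Shown for illustration only; the server's own implementation may differ in how it tokenizes words and counts syllables:

```typescript
// Flesch Reading Ease (1948): higher = easier, 0-100 scale
function fleschReadingEase(words: number, sentences: number, syllables: number): number {
  return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);
}

// Flesch-Kincaid Grade Level (1975): maps the same counts to a U.S. grade
function fleschKincaidGrade(words: number, sentences: number, syllables: number): number {
  return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
}

// Example: 100 words, 5 sentences, 130 syllables
console.log(fleschReadingEase(100, 5, 130));  // ≈ 76.56 ("fairly easy")
console.log(fleschKincaidGrade(100, 5, 130)); // ≈ 7.55 (7th-8th grade)
```

Note that a grade of 7.55 falls inside the lower secondary range that SC 3.1.5 (discussed below) uses as its threshold.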
## Getting started
Standard config works in most MCP clients:

```json
{
  "mcpServers": {
    "a11y-readability": {
      "command": "npx",
      "args": ["-y", "@weaaare/mcp-a11y-readability"]
    }
  }
}
```

**VS Code.** Add to your project's `.vscode/mcp.json` (or user-level `settings.json` under `"mcp"`):
```json
{
  "servers": {
    "a11y-readability": {
      "command": "npx",
      "args": ["-y", "@weaaare/mcp-a11y-readability"]
    }
  }
}
```

Or install via the VS Code CLI:

```shell
code --add-mcp '{"name":"a11y-readability","command":"npx","args":["-y","@weaaare/mcp-a11y-readability"]}'
```

**Claude Desktop.** Follow the MCP install guide and add the standard config above to your `claude_desktop_config.json`.
**Claude Code.**

```shell
claude mcp add a11y-readability npx -y @weaaare/mcp-a11y-readability
```

**Cursor.** Go to Cursor Settings → MCP → Add new MCP Server. Use the `command` type with `npx -y @weaaare/mcp-a11y-readability`, or add the standard config above to `.cursor/mcp.json`.

**Windsurf.** Follow the Windsurf MCP documentation and use the standard config above.
**Cline.** Add to your `cline_mcp_settings.json`:

```json
{
  "mcpServers": {
    "a11y-readability": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@weaaare/mcp-a11y-readability"],
      "disabled": false
    }
  }
}
```

**Kiro.** Follow the MCP Servers documentation and add the standard config above to `.kiro/settings/mcp.json`.
**Codex.**

```shell
codex mcp add a11y-readability npx "-y" "@weaaare/mcp-a11y-readability"
```

Or edit `~/.codex/config.toml`:

```toml
[mcp_servers.a11y-readability]
command = "npx"
args = ["-y", "@weaaare/mcp-a11y-readability"]
```

**Other clients.** Depending on the client, go to Advanced settings → Extensions → Add custom extension (type STDIO, command `npx -y @weaaare/mcp-a11y-readability`), go to Settings → AI → Manage MCP Servers → + Add and use the standard config above, or follow your client's MCP install guide with the standard config above.
## WCAG 2.2 SC 3.1.5 — Reading Level (AAA)

### The criterion
> When text requires reading ability more advanced than the lower secondary education level after removal of proper names and titles, supplemental content, or a version that does not require reading ability more advanced than the lower secondary education level, is available.
The criterion belongs to Principle 3 — Understandable, under Guideline 3.1 — Readable. It is the only WCAG success criterion that directly addresses the cognitive complexity of written content.
Lower secondary education level corresponds to 7–9 years of formal schooling (roughly ages 12–15), as defined by the International Standard Classification of Education (UNESCO). Text that exceeds this threshold creates barriers for people with reading disabilities (such as dyslexia), cognitive disabilities, and non-native speakers — even when those users are otherwise highly educated.
The criterion is classified as Level AAA — the highest conformance tier — because it is not always possible to simplify every piece of content (legal texts, scientific papers, technical documentation). However, when the text can be simplified or supplemented, doing so dramatically improves comprehension for a broad range of users.
### Sufficient techniques
WCAG defines five sufficient techniques for meeting SC 3.1.5. Any one of them (or a combination) is considered valid:
| Technique | ID | Summary | Testable with readability formulas? |
| --- | --- | --- | --- |
| Provide a text summary at lower secondary level | G86 | Write a short plain-language summary alongside the complex content. Measure its readability to confirm it is below the threshold. | Yes — measure the summary |
| Make the text itself easier to read | G153 | Shorten sentences, replace jargon, use active voice, use lists, limit conjunctions to two per sentence, one topic per paragraph. Measure the result. | Yes — measure the rewritten text |
| Provide visual illustrations | G103 | Add charts, diagrams, photographs, or graphic organizers that explain the same ideas as the text. | No — visual check |
| Provide a spoken version | G79 | Offer a recorded or synthesized audio version of the text. | No — audio check |
| Provide a sign language version | G160 | Include a sign language video that conveys the same information as the text. | No — video check |
Of these five techniques, G86 and G153 require measuring readability — and both state in their test procedure: "Measure the readability of the text. Check that the text requires reading ability less advanced than the lower secondary education level." This is exactly what mcp-a11y-readability does.
## How this MCP server helps AI agents
Traditional accessibility auditing tools can check contrast ratios, missing alt attributes, or ARIA roles automatically. But reading level has always been a blind spot for automated tools — it requires linguistic analysis that depends on the language of the text and the choice of readability formula.
By exposing readability analysis through the Model Context Protocol, this server gives AI coding agents the ability to reason about and verify text complexity as part of their workflow:
1. **Detect the problem.** An agent can use `analyze-readability` to check whether a piece of content (a landing page, a help article, a legal notice) exceeds the lower secondary education threshold according to validated formulas.
2. **Apply technique G153.** If the text is too complex, the agent can rewrite it using simpler language, then call `compare-texts` to verify the rewritten version scores below the threshold: a measurable before/after comparison.
3. **Apply technique G86.** If the original cannot be simplified (e.g. a legal contract), the agent can generate a plain-language summary and use `analyze-readability` to confirm the summary meets the required level.
4. **Choose the right formulas for the language.** The agent can call `list-formulas` to discover which formulas are available for the content's language (7 for Spanish, 6 for English), and use `analyze-readability-formula` to run a specific one if there is a domain preference (e.g. INFLESZ for healthcare texts in Spanish, SMOG for healthcare in English).
5. **Get actionable suggestions.** `suggest-readability-improvements` returns concrete guidance: which metrics are out of range (words per sentence, syllables per word, etc.) and what to change.
Without this MCP server, an AI agent has no way to measure whether its rewritten text actually meets the reading level requirement — it can only guess. With it, the agent can close the loop: analyze → rewrite → measure → verify.
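That closed loop can be sketched as follows. Here `measure` stands in for an `analyze-readability` tool call and `rewrite` for an LLM simplification pass (both hypothetical stand-ins), and treating a grade-level score of 9 or below as "lower secondary" is one common operationalization, not something WCAG fixes:

```typescript
type Measure = (text: string) => number;  // e.g. wraps an analyze-readability tool call
type Rewrite = (text: string) => string;  // e.g. asks the agent's LLM to simplify

// analyze -> rewrite -> measure -> verify, with a bounded number of rounds
function simplifyUntilCompliant(
  text: string,
  measure: Measure,
  rewrite: Rewrite,
  maxGrade = 9,
  maxRounds = 3,
): { text: string; grade: number; compliant: boolean } {
  let current = text;
  let grade = measure(current);
  for (let round = 0; round < maxRounds && grade > maxGrade; round++) {
    current = rewrite(current);  // apply technique G153
    grade = measure(current);    // re-verify against the threshold
  }
  return { text: current, grade, compliant: grade <= maxGrade };
}
```

Bounding the rounds matters: some texts (legal, scientific) cannot be simplified below the threshold, at which point the agent should fall back to technique G86 and summarize instead.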
## Acknowledgements

### Readability formulas

#### Spanish
- Fernández Huerta, J. (1959). Medidas sencillas de lecturabilidad. Consigna, 214, 29–32. Adapted from Flesch's formula for Spanish. Coefficients corrected by Gwillim Law (2011) as documented on legible.es.
- Szigriszt Pazos, F. (1993). Sistemas predictivos de legibilidad del mensaje escrito: fórmula de perspicuidad. Doctoral thesis, Universidad Complutense de Madrid. Reference: legible.es.
- Barrio-Cantalejo, I.M. et al. (2008). Validación de la Escala INFLESZ para evaluar la legibilidad de los textos dirigidos a pacientes. Anales del Sistema Sanitario de Navarra, 31(2), 135–152. Healthcare-oriented interpretation scale for the Szigriszt-Pazos formula. Reference: legible.es.
- Gutiérrez de Polini, L.E. (1972). Investigación sobre lectura en Venezuela. First readability formula designed natively for Spanish (not adapted from English). Reference: legible.es.
- Crawford, A.N. (1989). Fórmula y gráfico para determinar la comprensibilidad de textos de nivel primario en castellano. Lectura y Vida, 10(4). Grade-level formula for primary school texts. Reference: legible.es.
- Muñoz Baquedano, M. & Muñoz Urra, J. (2006). Legibilidad Mμ. Viña del Mar, Chile. Uses word-length variance instead of syllable counting. Reference: legible.es.
- García López, J.A. & Arcos, A. (1999). Medida de la legibilidad del material escrito. Pharm Care Esp, 1(6), 412–419. Returns the minimum age required to understand a text. Reference: legible.es.
#### English
- Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221–233. The most widely used readability formula worldwide.
- Kincaid, J.P. et al. (1975). Derivation of new readability formulas for Navy enlisted personnel. CNTECHTRA Research Branch Report 8-75. U.S. military standard for document readability.
- Gunning, R. (1952). The Technique of Clear Writing. McGraw-Hill. Estimates years of formal education needed to understand the text.
- McLaughlin, G.H. (1969). SMOG grading — a new readability formula. Journal of Reading, 22, 639–646. Recommended for healthcare materials.
- Coleman, M. & Liau, T. (1975). A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2), 283–284. Uses character count instead of syllable count.
- Smith, E.A. & Senter, R.J. (1967). Automated Readability Index. AMRL-TR-66-220. Designed for automated processing without syllable counting.
### References and standards
- legible.es — Comprehensive reference for Spanish readability formulas. Created by Alejandro Muñoz Fernández. All Spanish formulas in this package reference their documentation. Content licensed under CC BY-NC-SA 4.0.
- W3C — WCAG 2.2 SC 3.1.5 Reading Level. W3C content is used under the W3C Software and Document License.
### Libraries
- silabajs (v2.1.0) — Spanish syllable splitter by Nicolás Cofré Méndez. Zero dependencies, TypeScript, MIT license.
- syllable (v5.0.1) — English syllable counter by Titus Wormer (wooorm). Part of the unified ecosystem, MIT license.
## License
MIT — weAAAre
