# @ta11y/core
Core library for running web accessibility audits with ta11y.
## Install

```bash
npm install --save @ta11y/core
```

## Usage
The easiest way to use ta11y is via its CLI; this package exposes the same functionality programmatically.
```js
const { Ta11y } = require('@ta11y/core')

const ta11y = new Ta11y()

ta11y.audit('https://en.wikipedia.org')
  .then((results) => {
    console.log(JSON.stringify(results, null, 2))
  })
```

Alternatively, you can tell ta11y to crawl additional pages starting from the root page.
```js
ta11y.audit('https://en.wikipedia.org', {
  crawl: true,
  maxDepth: 1,
  maxVisit: 64
})
```

If you want to crawl non-public pages, pass an instance of Puppeteer. This is useful for testing in development or behind corporate firewalls.
```js
const puppeteer = require('puppeteer')

// inside an async function
const browser = await puppeteer.launch()

await ta11y.audit('http://localhost:3000', {
  browser,
  crawl: true,
  maxDepth: 0
})
```

You can also pass HTML directly to audit (whole pages or fragments).
```js
ta11y.audit('<!doctype html><html><body><h1>I ❤ accessibility</h1></body></html>')
```

## API Key
The free tier is subject to rate limits and a 60-second timeout, so if you're crawling a larger site, you're better off running content extraction locally.
If you're processing a non-publicly accessible website (like localhost), then you must perform content extraction locally.
You can bypass rate limiting by signing up for an API key and passing it either via the `apiKey` option of the `Ta11y` constructor or via the `TA11Y_API_KEY` environment variable.
```js
const ta11y = new Ta11y({
  apiKey: '<your-api-key>'
})
```

Visit ta11y once you're ready to sign up for an API key.
## API

### Ta11y

Class to run web accessibility audits via the ta11y API.

Type: `function (opts)`

- `opts` **object?** Config options.
### audit
Runs an accessibility audit against the given URL or raw HTML, optionally crawling the site to discover additional pages and auditing those too.
To audit local or private websites, pass an instance of Puppeteer as `opts.browser`.
By default, both content extraction and auditing are performed remotely. This works best for auditing publicly accessible websites.
Type: `function (urlOrHtml, opts): Promise`

- `urlOrHtml` **string** URL or raw HTML to process.
- `opts` **object** Config options.
  - `opts.suites` **Array&lt;string&gt;?** Optional array of audit suites to run. Defaults to running all audit suites. Possible values:
    - `section508`
    - `wcag2a`
    - `wcag2aa`
    - `wcag2aaa`
    - `best-practice`
    - `html`
  - `opts.browser` **object?** Optional Puppeteer browser instance to use for auditing websites that aren't publicly reachable.
  - `opts.crawl` **boolean** Whether or not to crawl additional pages. (optional, default `false`)
  - `opts.maxDepth` **number** Maximum crawl depth while crawling. (optional, default `16`)
  - `opts.maxVisit` **number?** Maximum number of pages to visit while crawling.
  - `opts.sameOrigin` **boolean** Whether or not to only consider crawling links with the same origin as the root URL. (optional, default `true`)
  - `opts.blacklist` **Array&lt;string&gt;?** Optional blacklist of URL glob patterns to ignore.
  - `opts.whitelist` **Array&lt;string&gt;?** Optional whitelist of URL glob patterns to only include.
  - `opts.gotoOptions` **object?** Customize the `Page.goto` navigation options.
  - `opts.viewport` **object?** Set the browser window's viewport dimensions and/or resolution.
  - `opts.userAgent` **string?** Set the browser's user-agent.
  - `opts.emulateDevice` **string?** Emulate a specific device type. Use the `name` property from one of the built-in devices. Overrides `viewport` and `userAgent`.
  - `opts.onNewPage` **function?** Optional async function called every time a new page is initialized before proceeding with extraction.
  - `opts.file` **string?** Write results to a file (output format determined by file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.)
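As a sketch, several of the crawl-scoping options above can be combined in a single call. The glob patterns, limits, and output filename here are illustrative choices, not recommendations:

```js
const { Ta11y } = require('@ta11y/core')

const ta11y = new Ta11y()

ta11y.audit('https://en.wikipedia.org', {
  crawl: true,
  maxDepth: 2,
  maxVisit: 32,
  sameOrigin: true,
  // Only crawl article pages; skip talk and special namespaces.
  whitelist: ['**/wiki/**'],
  blacklist: ['**/wiki/Talk:*', '**/wiki/Special:*'],
  // Run a subset of the available audit suites.
  suites: ['wcag2aa', 'best-practice'],
  // Also write the results to a spreadsheet.
  file: 'audit-results.xlsx'
}).then((results) => {
  console.log(JSON.stringify(results, null, 2))
})
```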
### extract
Extracts the content from a given URL or raw HTML, optionally crawling the site to discover and extract additional pages as well.
To extract content from local or private websites, pass an instance of Puppeteer as `opts.browser`.
Type: `function (urlOrHtml, opts): Promise`

- `urlOrHtml` **string** URL or raw HTML to process.
- `opts` **object** Config options.
  - `opts.browser` **object?** Optional Puppeteer browser instance to use for extracting from websites that aren't publicly reachable.
  - `opts.crawl` **boolean** Whether or not to crawl additional pages. (optional, default `false`)
  - `opts.maxDepth` **number** Maximum crawl depth while crawling. (optional, default `16`)
  - `opts.maxVisit` **number?** Maximum number of pages to visit while crawling.
  - `opts.sameOrigin` **boolean** Whether or not to only consider crawling links with the same origin as the root URL. (optional, default `true`)
  - `opts.blacklist` **Array&lt;string&gt;?** Optional blacklist of URL glob patterns to ignore.
  - `opts.whitelist` **Array&lt;string&gt;?** Optional whitelist of URL glob patterns to only include.
  - `opts.gotoOptions` **object?** Customize the `Page.goto` navigation options.
  - `opts.viewport` **object?** Set the browser window's viewport dimensions and/or resolution.
  - `opts.userAgent` **string?** Set the browser's user-agent.
  - `opts.emulateDevice` **string?** Emulate a specific device type. Use the `name` property from one of the built-in devices. Overrides `viewport` and `userAgent`.
  - `opts.onNewPage` **function?** Optional async function called every time a new page is initialized before proceeding with extraction.
  - `opts.file` **string?** Write results to a file (output format determined by file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.)
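For example, extraction can be run locally against a development server and the results inspected before any auditing happens. This sketch assumes a server listening on port 3000:

```js
const { Ta11y } = require('@ta11y/core')
const puppeteer = require('puppeteer')

async function main () {
  const ta11y = new Ta11y()
  const browser = await puppeteer.launch()

  // Extract content locally via Puppeteer without auditing it yet.
  const extractResults = await ta11y.extract('http://localhost:3000', {
    browser,
    crawl: true,
    maxDepth: 1
  })

  await browser.close()
  console.log(JSON.stringify(extractResults, null, 2))
}

main()
```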
### auditExtractResults
Runs an accessibility audit against previously collected extraction results from `@ta11y/extract`.
Type: `function (extractResults, opts): Promise`

- `extractResults` **object** Extraction results conforming to the output format from `@ta11y/extract`.
- `opts` **object** Config options. (optional, default `{}`)
  - `opts.suites` **Array&lt;string&gt;?** Optional array of audit suites to run. Defaults to running all audit suites. Possible values:
    - `section508`
    - `wcag2a`
    - `wcag2aa`
    - `wcag2aaa`
    - `best-practice`
    - `html`
  - `opts.file` **string?** Write results to a file (output format determined by file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.)
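Putting the two phases together, a local-extraction workflow might look like the following sketch, assuming a private site at `localhost:3000` and an API key in `TA11Y_API_KEY`:

```js
const { Ta11y } = require('@ta11y/core')
const puppeteer = require('puppeteer')

async function main () {
  const ta11y = new Ta11y({ apiKey: process.env.TA11Y_API_KEY })
  const browser = await puppeteer.launch()

  // Phase 1: extract content locally, since localhost isn't
  // reachable by the remote ta11y API.
  const extractResults = await ta11y.extract('http://localhost:3000', {
    browser,
    crawl: true
  })
  await browser.close()

  // Phase 2: audit the previously extracted content remotely.
  const auditResults = await ta11y.auditExtractResults(extractResults, {
    suites: ['wcag2a', 'wcag2aa']
  })
  console.log(JSON.stringify(auditResults, null, 2))
}

main()
```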
## License
MIT © Saasify
