# Axios Robots Exclusion Protocol 🤖
A lightweight, robust Axios request interceptor that automatically enforces the Robots Exclusion Protocol (robots.txt) for your web scrapers and bots.

It ensures your bot plays by the rules defined by website owners, preventing unauthorized access and potential bans.
## Features

- 🚀 Automated Compliance: Validates every request against `robots.txt` rules (cached per origin).
- ⏱️ Crawl-Delay: Optionally waits before requests if a `Crawl-delay` is specified.
- 🛡️ Strict Mode: Invalid URLs, non-HTTP/S protocols, and unreachable `robots.txt` files (non-4xx errors) block requests by default.
- ✨ Clean Architecture: Built with maintainability and separation of concerns in mind.
- 🔌 Plug-and-Play: Easily attaches to any Axios instance.
- 📦 Lightweight: Minimal dependencies (`robots-parser`).
## Installation

```bash
npm install axios-robots
# or
yarn add axios-robots
# or
pnpm add axios-robots
# or
bun add axios-robots
```

## Usage
### Basic Setup

Import the `applyRobotsInterceptor` function and attach it to your Axios instance. You must provide a `userAgent` that identifies your bot.
```ts
import axios from 'axios';
import { applyRobotsInterceptor } from 'axios-robots';

const client = axios.create();

// Apply the interceptor
applyRobotsInterceptor(client, {
  userAgent: 'MyCoolBot/1.0',
});

async function crawl() {
  try {
    // 1. Valid request (if allowed by robots.txt)
    const response = await client.get('https://www.google.com/');
    console.log('Data:', response.data);

    // 2. Blocked request (e.g. Google disallows /search)
    await client.get('https://www.google.com/search?q=axios-robots');
  } catch (error: any) {
    if (error.name === 'RobotsError') {
      console.error('⛔ Access denied by robots.txt:', error.message);
    } else {
      console.error('Network or other error:', error);
    }
  }
}

crawl();
```

## API Reference
### `applyRobotsInterceptor(axiosInstance, options)`

Attaches the interceptor to the provided Axios instance.

- `axiosInstance`: `AxiosInstance` - The instance to modify.
- `options`: `RobotsPluginOptions` - Configuration object.
### `RobotsPluginOptions`

```ts
interface RobotsPluginOptions {
  userAgent: string;
  crawlDelayCompliance?: CrawlDelayComplianceMode; // default: CrawlDelayComplianceMode.Await
  cachingPolicy?: CachingPolicy; // default: Indefinite (caches forever)
}

enum CrawlDelayComplianceMode {
  Await = 'await',    // Respects delay by waiting
  Ignore = 'ignore',  // Ignores delay
  Failure = 'failure' // Throws Error if delay is not met
}
```

### `CachingPolicy`
You can configure how long `robots.txt` is cached.
```ts
import { CachingPolicyType } from 'axios-robots';

// Option 1: Indefinite Caching (Default)
const indefinite = {
  type: CachingPolicyType.Indefinite
};

// Option 2: Time-based Expiration
const timeBased = {
  type: CachingPolicyType.ExpireAfter,
  duration: '1h' // Supports strings ('5m', '1d', '200ms') or numbers (milliseconds)
};

// Option 3: Request-based Expiration
const requestBased = {
  type: CachingPolicyType.RequestCount,
  maxRequests: 10 // Expire after 10 requests
};
```
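Putting it together, a caching policy is passed alongside the other options when the interceptor is applied. The snippet below is an illustrative sketch: it assumes `CrawlDelayComplianceMode` is exported from the package entry point in the same way `CachingPolicyType` is, and the bot name and one-hour expiry are arbitrary example values.

```ts
import axios from 'axios';
import {
  applyRobotsInterceptor,
  CrawlDelayComplianceMode, // assumed to be exported like CachingPolicyType
  CachingPolicyType,
} from 'axios-robots';

const client = axios.create();

applyRobotsInterceptor(client, {
  userAgent: 'MyCoolBot/1.0',                           // identifies your bot in robots.txt rules
  crawlDelayCompliance: CrawlDelayComplianceMode.Await, // wait out Crawl-delay instead of ignoring or failing
  cachingPolicy: {
    type: CachingPolicyType.ExpireAfter,
    duration: '1h',                                     // re-fetch robots.txt at most once per hour
  },
});
```

## Error Handling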
The interceptor throws a `RobotsError` in the following cases:
- Blocked by Rules: The URL is disallowed by `robots.txt` for your User-Agent.
- Invalid URL: The request URL cannot be parsed.
- Invalid Protocol: The protocol is not `http:` or `https:`.
- Unreachable robots.txt: The `robots.txt` file could not be fetched (and did not return a 4xx status).
Note: If `robots.txt` returns a client error (4xx), e.g. 404 Not Found or 403 Forbidden, the library assumes Allow All (per RFC 9309).
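Since blocked URLs are expected during a normal crawl, one common pattern is to treat a `RobotsError` as a "skip this URL" signal rather than a hard failure. A minimal sketch of that pattern, relying only on the `error.name === 'RobotsError'` check shown above (the `fetchIfAllowed` helper is our own, not part of the library):

```ts
import { AxiosInstance } from 'axios';

// Hypothetical helper: returns the page body, or null when robots.txt forbids the URL.
// Network failures and HTTP errors are rethrown so they can be handled separately.
async function fetchIfAllowed(client: AxiosInstance, url: string): Promise<string | null> {
  try {
    const response = await client.get<string>(url);
    return response.data;
  } catch (error: any) {
    if (error?.name === 'RobotsError') {
      console.warn(`Skipping ${url}: disallowed by robots.txt`);
      return null;
    }
    throw error; // not a robots.txt issue: propagate
  }
}
```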
## How It Works
- Interception: Intercepts every HTTP/S request made by the Axios instance.
- Fetch: Automatically fetches the `robots.txt` from the request's origin (e.g., `https://example.com/robots.txt`) using your configured User-Agent.
- Cache: Caches the parsed `robots.txt` rules in memory to prevent redundant requests.
- Validate: Checks if the target URL is allowed.
- Proceed or Block:
  - If Allowed: The request proceeds normally.
  - If Disallowed (or error): The request is cancelled immediately with a `RobotsError`.
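For intuition, here is a simplified, conceptual sketch of that flow built directly on an Axios request interceptor and the `robots-parser` package. It is not the library's actual implementation: it omits crawl-delay handling, strict-mode edge cases, and cache expiry, and it throws a bare `Error` instead of a `RobotsError`.

```ts
import axios, { InternalAxiosRequestConfig } from 'axios';
import robotsParser from 'robots-parser';

const USER_AGENT = 'MyCoolBot/1.0';
const cache = new Map<string, ReturnType<typeof robotsParser>>();

// The interceptor is attached to a dedicated client; robots.txt itself is
// fetched with the plain `axios` export so the check does not recurse.
const client = axios.create();

client.interceptors.request.use(async (config: InternalAxiosRequestConfig) => {
  const url = new URL(config.url ?? '', config.baseURL);
  const robotsUrl = `${url.origin}/robots.txt`;

  // Fetch and cache the parsed robots.txt once per origin.
  let robots = cache.get(url.origin);
  if (!robots) {
    const res = await axios.get(robotsUrl, {
      headers: { 'User-Agent': USER_AGENT },
      validateStatus: () => true, // treat 4xx as "no rules" instead of throwing
    });
    robots = robotsParser(robotsUrl, res.status < 400 ? String(res.data) : '');
    cache.set(url.origin, robots);
  }

  // Block the request if the URL is not allowed for our User-Agent.
  if (!robots.isAllowed(url.href, USER_AGENT)) {
    throw new Error(`Blocked by robots.txt: ${url.href}`);
  }
  return config;
});
```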
## Compliance & Roadmap
### ✅ Implemented

- [x] RFC 9309 Compliance: Full support for the standard Robots Exclusion Protocol.
- [x] Standard Directives: Supports `User-agent`, `Allow`, and `Disallow`.
- [x] Wildcards: Supports standard path matching including `*` and `$`.
- [x] Crawl-delay: The interceptor enforces `Crawl-delay` directives (automatic throttling) if configured.
- [x] Cache TTL: Flexible caching policies (indefinite or expiration-based).
### 🚧 Roadmap

- [ ] Sitemap: Does not currently expose or parse `Sitemap` directives for the consumer.
## Contributing

We love contributions! Please read our CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

If you're looking for a place to start, check out the Roadmap.
## License

BSD 3-Clause
