fastly-mcp-server v2.0.2
# Fastly MCP server v2
An MCP server that gives AI agents full access to the Fastly API.
This is a complete rewrite of the original Fastly MCP server (which was written in Go and wrapped the CLI). That approach produced a massive tool manifest that ate context windows and still couldn't cover the full API surface.
This version takes a different approach: three small tools (`search`, `inspect`, and `execute`) that let agents discover any of the 583 API methods across 133 API classes, read their full documentation, and compose real JavaScript calls against the official Fastly client.
Agents search the API index, inspect method signatures, and run code directly, with no wrapper layer in between.
## Why this is fast
Traditional MCP servers expose one tool per API method. An agent that needs to, say, list all backends across three services has to make three separate tool calls, wait for each response, interpret the results, and stitch them together. Every round trip burns tokens and adds latency.
With `execute`, the agent writes a short JavaScript snippet that does all of that in one call:

```javascript
const api = new Fastly.BackendApi();
const services = ['svc1', 'svc2', 'svc3'];
const results = await Promise.all(
  services.map(id => api.listBackends({ service_id: id, version_id: 1 }))
);
return results.flat();
```

One round trip, one result. The agent composes API calls using normal language features (loops, `Promise.all`, destructuring, filtering) instead of orchestrating them through repeated tool invocations. Complex queries that used to take five or six back-and-forth turns collapse into a single `execute` call.
## Install
```sh
npm install -g fastly-mcp-server
# or
bunx fastly-mcp-server
```

Requires Bun and a Fastly API token.
## Configuration
Add this to your MCP client configuration:
```json
{
  "mcpServers": {
    "fastly": {
      "command": "bunx",
      "args": ["fastly-mcp-server"],
      "env": {
        "FASTLY_API_TOKEN": "your-token-here"
      }
    }
  }
}
```

Or if you installed globally:
```json
{
  "mcpServers": {
    "fastly": {
      "command": "fastly-mcp-server",
      "env": {
        "FASTLY_API_TOKEN": "your-token-here"
      }
    }
  }
}
```

## Tools
### search(query)
Queries an in-memory index of 583 API methods across 133 API classes, built at startup from the bundled Fastly JS client documentation. Returns the top 10 matches as structured JSON.
Search by method name, API class, HTTP path, or keyword:
```javascript
search("purge")
search("PurgeApi")
search("createBackend")
search("/service/{service_id}/purge")
```

Scoring: exact method name (100) > partial method name (50) > API class (30) > HTTP path (20) > description (10) > return type (5) / parameter name (5). Multi-token queries get a bonus when every token matches.
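The scoring tiers above can be sketched as a simple function. This is an illustrative reimplementation, not the server's actual code: the `entry` record shape (`method`, `apiClass`, `path`, and so on) is assumed, and the real index also handles tokenization and the multi-token bonus.

```javascript
// Illustrative scoring sketch mirroring the documented tiers.
// `entry` is a hypothetical index record; the server's real shape may differ.
function scoreEntry(entry, query) {
  const q = query.toLowerCase();
  let score = 0;
  if (entry.method.toLowerCase() === q) score += 100;             // exact method name
  else if (entry.method.toLowerCase().includes(q)) score += 50;   // partial method name
  if (entry.apiClass.toLowerCase().includes(q)) score += 30;      // API class
  if (entry.path.toLowerCase().includes(q)) score += 20;          // HTTP path
  if (entry.description.toLowerCase().includes(q)) score += 10;   // description
  if (entry.returnType.toLowerCase().includes(q)) score += 5;     // return type
  if (entry.params.some(p => p.toLowerCase().includes(q))) score += 5; // parameter name
  return score;
}

// Rank matches and keep the top 10, as search() does.
function search(index, query) {
  return index
    .map(e => ({ entry: e, score: scoreEntry(e, query) }))
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, 10);
}
```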
### inspect(method)
Returns full documentation for a specific API method, including parameters, return type, and example code. Use this after search to understand how to call a method and what it returns.
Accepts a method name or ClassName.methodName:
```javascript
inspect("listServices")
inspect("ServiceApi.listServices")
```

### execute(code)
Runs JavaScript in a short-lived Bun subprocess with the fastly client library pre-loaded and authenticated. The code runs as an async function body; use return to get results back.
```javascript
const api = new Fastly.ServiceApi();
return await api.listServices();
```

`console.log`, `console.warn`, and `console.error` output is captured and returned alongside the result in a `console` array, useful for debugging.
Large results are automatically summarized: arrays over 10 items are truncated to the first 10 with a count, and objects with more than 30 keys show a preview. This keeps tool responses compact without requiring the agent to paginate manually.
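The truncation rules can be sketched like this; it is an illustrative approximation, and the server's actual summary field names may differ.

```javascript
// Illustrative sketch of result summarization: arrays over 10 items are
// truncated with a count, and objects with more than 30 keys get a key preview.
function summarize(value) {
  if (Array.isArray(value) && value.length > 10) {
    return { items: value.slice(0, 10), truncated: true, totalCount: value.length };
  }
  if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
    const keys = Object.keys(value);
    if (keys.length > 30) {
      return { keyPreview: keys.slice(0, 30), truncated: true, totalKeys: keys.length };
    }
  }
  return value; // small values pass through unchanged
}
```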
Resource limits: 30-second timeout, 100 KB stdout cap.
## How it works
At startup the server parses every *Api.md doc bundled in the package to build a searchable index of all available API methods, parameters, HTTP paths, and descriptions.
When an agent calls execute, the server spawns a Bun subprocess that has the Fastly client pre-configured with your API token. The subprocess runs the provided code, captures any console output, serializes the result (handling circular references, BigInt, Buffers, etc.), and returns it as JSON.
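The tricky part of that last step is serializing arbitrary results. A minimal sketch using a `JSON.stringify` replacer, illustrative rather than the server's exact code (note that Buffers are converted by their built-in `toJSON` before the replacer sees them):

```javascript
// Illustrative sketch: serialize arbitrary values to JSON, handling the cases
// plain JSON.stringify rejects (BigInt, circular references).
function safeSerialize(value) {
  const seen = new WeakSet();
  return JSON.stringify(value, (_key, val) => {
    if (typeof val === 'bigint') return `${val}n`; // BigInt has no native JSON form
    if (val && typeof val === 'object') {
      // Break cycles; shared non-circular references are also flagged,
      // a coarse but safe approximation.
      if (seen.has(val)) return '[Circular]';
      seen.add(val);
    }
    return val; // Buffers arrive here already as { type: 'Buffer', data: [...] }
  });
}
```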
## Secret encryption
API responses can contain sensitive tokens (API keys, credentials, etc.). By default these flow through to the LLM provider in plain text. You can enable transparent secret encryption so the model never sees real credential values.
When enabled, the server uses format-preserving encryption (fast-cipher) to replace recognized tokens with encrypted equivalents in tool outputs. Encrypted tokens look realistic and stay consistent across turns, so the agent can reference and pass them back. They're decrypted transparently on the way in before any tool executes.
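The "stays consistent across turns" property can be sketched with a simple registry. This illustrative version substitutes counter-based placeholders rather than using fast-cipher's real format-preserving encryption, and the token pattern shown is just one example:

```javascript
// Illustrative sketch of consistent secret substitution. The real server uses
// format-preserving encryption (fast-cipher); here a pair of Maps stands in,
// so the same plaintext token always maps to the same replacement.
const TOKEN_PATTERN = /\bsk-[A-Za-z0-9]{20,}\b/g; // example pattern only

function makeRedactor() {
  const toEncrypted = new Map(); // plaintext -> replacement
  const toPlain = new Map();     // replacement -> plaintext
  let counter = 0;
  return {
    // Outbound: replace recognized tokens before text reaches the model.
    encode(text) {
      return text.replace(TOKEN_PATTERN, (match) => {
        if (!toEncrypted.has(match)) {
          const replacement = `sk-${String(++counter).padStart(20, '0')}`;
          toEncrypted.set(match, replacement);
          toPlain.set(replacement, match);
        }
        return toEncrypted.get(match);
      });
    },
    // Inbound: restore real tokens before any tool executes.
    decode(text) {
      return text.replace(TOKEN_PATTERN, (m) => toPlain.get(m) ?? m);
    },
  };
}
```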
Built-in patterns cover GitHub PATs, OpenAI keys, Anthropic keys, AWS keys, Stripe, Slack, Fastly tokens, and many more.
### Enabling encryption
Via CLI flags:
```sh
fastly-mcp-server --encrypt-secrets
# or with a fixed key (32 hex chars = 16 bytes):
fastly-mcp-server --encrypt-secrets --encrypt-key 0102030405060708090a0b0c0d0e0f10
```

Via environment variables:
```json
{
  "mcpServers": {
    "fastly": {
      "command": "fastly-mcp-server",
      "env": {
        "FASTLY_API_TOKEN": "your-token-here",
        "FASTLY_MCP_ENCRYPT_SECRETS": "true"
      }
    }
  }
}
```

### Configuration
| Setting | CLI flag | Env var | Default | Description |
| --- | --- | --- | --- | --- |
| Enable encryption | `--encrypt-secrets` | `FASTLY_MCP_ENCRYPT_SECRETS` | `false` | Turn on secret encryption |
| Encryption key | `--encrypt-key <hex>` | `FASTLY_MCP_ENCRYPT_KEY` | random | 32 hex characters (16 bytes). If unset, an ephemeral random key is generated per session. |
| Tweak | (none) | `FASTLY_MCP_ENCRYPT_TWEAK` | none | Optional per-context tweak string for domain separation |
CLI flags take precedence over environment variables.
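That precedence rule can be sketched for a single setting; `argv` here is an assumed, already-parsed flag map, not the server's actual option parser:

```javascript
// Illustrative resolution of one setting: CLI flag > env var > default.
function resolveEncryptKey(argv, env) {
  if (argv['encrypt-key'] !== undefined) return argv['encrypt-key'];  // CLI flag wins
  if (env.FASTLY_MCP_ENCRYPT_KEY) return env.FASTLY_MCP_ENCRYPT_KEY; // then the env var
  return null; // null -> generate an ephemeral random key for this session
}
```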
### Scope and limitations
- Encryption is per-session. When the server restarts, the key (if ephemeral) and the in-memory registry are lost. There is no cross-restart decryption.
- Only tokens matching known patterns are encrypted. Unrecognized credential formats pass through unchanged.
- Only text content is scanned. Binary data and images are not affected.
- The `FASTLY_API_TOKEN` itself never appears in tool results (it only lives in `process.env` inside the sandbox), but other tokens discovered in API responses are protected.
