@shellifyai/shell-tool
Lightweight Node client and Vercel AI SDK tool for ShellifyAI — secure, sandboxed shell execution for AI agents.
What is ShellifyAI?
ShellifyAI runs shell commands in isolated sandboxes so your AI agents can execute code safely. Instead of giving models direct access to your machine, commands run in ephemeral containers with:
- Security isolation — No access to host system
- Streaming output — Real-time stdout/stderr
- File artifacts — Created files uploaded with signed URLs
- Session persistence — Maintain state across commands
Install
```shell
npm install @shellifyai/shell-tool
# or
pnpm add @shellifyai/shell-tool
```

Peer dependencies: `ai` (^5.0.0), `zod` (^3.23.0)
Quick Start (Vercel AI SDK)
The easiest integration — just add shellifyTool to your tools and the SDK handles execution automatically.
How it works: You provide a natural language prompt. The AI model decides when to run shell commands and generates the command parameter automatically. The shellifyTool executes it in a sandbox and returns the result to the model.
```typescript
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create a Python file that prints Hello World and run it",
  tools: {
    // The model will call this tool with { command: "..." } when needed
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
      adapterType: "local_shell", // Force bare sandbox (bypass managed adapters)
      structuredResponse: true, // Always emit summary + structured_log with artifacts
    }),
  },
  stopWhen: stepCountIs(5), // Allow up to 5 tool calls
});

console.log(text);
```

The flow: prompt → model decides to use shell → model generates `{ command: "echo 'print(\"Hello World\")' > hello.py && python hello.py" }` → shellifyTool executes → result goes back to the model.
Streaming Example (Next.js API Route)
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-5.1"),
    messages,
    tools: {
      shell: shellifyTool({
        apiKey: process.env.SHELLIFYAI_API_KEY!,
      }),
    },
    stopWhen: stepCountIs(5),
  });

  return result.toDataStreamResponse();
}
```

Direct Client Usage
For non-Vercel AI SDK projects, use ShellifyClient directly:
```typescript
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({
  apiKey: process.env.SHELLIFYAI_API_KEY!,
});

// Execute and get the final result
const result = await client.execute({
  tool: "local_shell",
  payload: { command: "echo hello && ls -la" },
  structuredResponse: true, // Always get stdout/stderr/exitCode/artifacts in summary + structured_log
});

console.log(result.summary.stdout);
console.log(result.summary.artifacts); // Any files created
```

Stream Events
```typescript
for await (const event of client.stream({
  tool: "local_shell",
  payload: { command: "pip install pandas && python script.py" },
})) {
  if (event.type === "log") {
    console.log(event.data); // Real-time output
  } else if (event.type === "artifact") {
    console.log("File created:", event.url);
  }
}
```

Configuration
shellifyTool Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | process.env.SHELLIFYAI_API_KEY | Your project API key |
| baseUrl | string | https://shellifyai.com | API endpoint |
| adapterType | string | project default | Override adapter per run (e.g., local_shell for direct sandbox) |
| structuredResponse | boolean | true | Include structured summary + final structured_log event (artifacts/stdout/stderr/exitCode) |
| description | string | — | Override tool description for the model |
ShellifyClient Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | — | Required. Project API key |
| baseUrl | string | https://shellifyai.com | API endpoint |
| fetchImpl | typeof fetch | globalThis.fetch | Custom fetch implementation |
Execute/Stream Options
| Option | Type | Description |
|--------|------|-------------|
| tool | string | Tool to invoke (default: local_shell) |
| payload.command | string | Required. Shell command to run |
| payload.intent | string | Context for what you're trying to do |
| payload.sessionId | string | Reuse session for file persistence |
| payload.workingDirectory | string | Working directory for command |
| payload.env | Record<string, string> | Environment variables |
| payload.timeoutMs | number | Timeout in ms (default: 120000) |
| payload.systemMessage | string | Custom system prompt |
| structuredResponse | boolean | Add structured summary + final structured_log event (default: true) |
| sandboxId | string | Target specific sandbox |
| signal | AbortSignal | Abort controller signal |
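As a hedged illustration of how several of these options combine, here is a self-contained sketch. The field names mirror the table above; the shapes are defined locally and the command, session id, and values are invented for the example:

```typescript
// Local mirrors of the Execute/Stream Options table above, so the sketch
// runs standalone. All concrete values below are illustrative.
interface ExecutePayload {
  command: string;            // required shell command
  sessionId?: string;         // reuse a session for file persistence
  workingDirectory?: string;  // where the command runs
  env?: Record<string, string>;
  timeoutMs?: number;         // default 120000
}

interface ExecuteRequest {
  tool: string;               // default "local_shell"
  payload: ExecutePayload;
  structuredResponse?: boolean;
  signal?: AbortSignal;
}

// A long-running job that reuses files from an earlier call in the same session:
const controller = new AbortController();

const request: ExecuteRequest = {
  tool: "local_shell",
  payload: {
    command: "python train.py",
    sessionId: "session-from-previous-run", // hypothetical id
    env: { PYTHONUNBUFFERED: "1" },
    timeoutMs: 300_000, // raise the 120s default for slow jobs
  },
  structuredResponse: true,
  signal: controller.signal, // call controller.abort() to cancel the run
};
```

A request object like this would be passed to `client.execute(request)` or `client.stream(request)`.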
Response Format
```typescript
interface ShellifyResult {
  requestId: string;
  adapter: string;
  events: ShellifyEvent[];
  summary: {
    stdout: string;
    stderr: string;
    exitCode?: number;
    status?: string;
    sessionId?: string;
    artifacts: Array<{
      url?: string;
      filename?: string;
      contentType?: string;
    }>;
  };
}
```

Streaming vs JSON: For production, prefer `Accept: application/jsonl` (NDJSON streaming) to capture log and artifact events reliably. The default JSON mode works for simple scripts but can miss late events if you read the response before the stream flushes.
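If you call the HTTP API directly rather than through the client, an NDJSON body splits cleanly line by line. A minimal, package-independent parsing sketch (the event field names follow the Response Format above; the sample payload is invented):

```typescript
// Each line of an application/jsonl response body is one JSON-encoded event.
interface RawEvent {
  type: string;
  [key: string]: unknown;
}

function parseNdjson(body: string): RawEvent[] {
  return body
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as RawEvent);
}

const sample =
  '{"type":"meta","requestId":"req-1"}\n' +
  '{"type":"log","stream":"stdout","data":"hello\\n"}\n' +
  '{"type":"status","status":"completed"}\n';

const events = parseNdjson(sample); // 3 events
```

In a real stream you would buffer chunks and split on newlines as they arrive; the client does this for you.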
Event Types (Streaming)
| Type | Description |
|------|-------------|
| meta | Request metadata (requestId, adapter) |
| status | Execution status changes (running, completed, failed) |
| log | stdout/stderr output with stream field |
| artifact | File created with url and filename |
| structured_log | Final structured summary when structuredResponse is enabled (stdout, stderr, exitCode, artifacts) |
| error | Error message |
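A hedged sketch of dispatching on these event types. The discriminated-union shape below is assumed from the table and the streaming examples above (exact field names on `meta` and `structured_log` may differ in the real package):

```typescript
// Assumed event union, modeled on the Event Types table above.
type StreamEvent =
  | { type: "meta"; requestId: string; adapter: string }
  | { type: "status"; status: "running" | "completed" | "failed" }
  | { type: "log"; stream: "stdout" | "stderr"; data: string }
  | { type: "artifact"; url: string; filename: string }
  | { type: "structured_log"; stdout: string; stderr: string; exitCode?: number }
  | { type: "error"; message: string };

// Fold a stream of events into a compact result.
function summarize(events: StreamEvent[]): {
  stdout: string;
  files: string[];
  failed: boolean;
} {
  let stdout = "";
  const files: string[] = [];
  let failed = false;

  for (const event of events) {
    switch (event.type) {
      case "log":
        if (event.stream === "stdout") stdout += event.data;
        break;
      case "artifact":
        files.push(event.filename);
        break;
      case "status":
        if (event.status === "failed") failed = true;
        break;
      case "error":
        failed = true;
        break;
    }
  }
  return { stdout, files, failed };
}
```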
Changelog
- 0.1.2 — Default `structuredResponse: true` for client + tool; added streaming guidance and adapter override docs.
Environment Variables
```shell
# Required
SHELLIFYAI_API_KEY=your_api_key

# Optional overrides
SHELLIFYAI_BASE_URL=https://shellifyai.com
SHELLIFY_API_KEY=fallback_key  # Legacy fallback
```

Get your API key from the ShellifyAI Dashboard.
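The documented fallback order (prefer `SHELLIFYAI_API_KEY`, fall back to the legacy `SHELLIFY_API_KEY`) can be sketched as a small helper. `resolveApiKey` is not part of the package, just an illustration:

```typescript
// Prefer SHELLIFYAI_API_KEY; fall back to the legacy SHELLIFY_API_KEY.
function resolveApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.SHELLIFYAI_API_KEY ?? env.SHELLIFY_API_KEY;
  if (!key) {
    throw new Error("Missing API key: set SHELLIFYAI_API_KEY");
  }
  return key;
}

console.log(resolveApiKey({ SHELLIFY_API_KEY: "legacy-key" })); // "legacy-key"
```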
Error Handling
```typescript
try {
  const result = await client.execute({
    tool: "local_shell",
    payload: { command: "some_command" },
  });

  if (result.summary.exitCode !== 0) {
    console.error("Command failed:", result.summary.stderr);
  }
} catch (error) {
  // Network or API errors (catch binds `unknown` in strict TypeScript)
  console.error("API error:", error instanceof Error ? error.message : error);
}
```

TypeScript
Full TypeScript support with exported types:
```typescript
import type {
  ShellifyClient,
  ShellifyClientConfig,
  ShellifyResult,
  ShellifyEvent,
  ShellifySummary,
  Artifact,
  AdapterType,
  ExecuteOptions,
  ShellifyToolOptions,
} from "@shellifyai/shell-tool";
```
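As a sketch of working with these types, local mirrors of the `ShellifySummary` and `Artifact` shapes (copied from the Response Format section) keep a couple of small helpers self-contained. `succeeded` and `firstArtifactUrl` are illustrative, not package exports:

```typescript
// Local copies of the exported shapes so this sketch runs standalone.
interface Artifact {
  url?: string;
  filename?: string;
  contentType?: string;
}

interface ShellifySummary {
  stdout: string;
  stderr: string;
  exitCode?: number;
  artifacts: Artifact[];
}

// Treat a missing exit code as success (assumption for this sketch).
function succeeded(summary: ShellifySummary): boolean {
  return (summary.exitCode ?? 0) === 0;
}

// Grab the first artifact that actually has a signed URL.
function firstArtifactUrl(summary: ShellifySummary): string | undefined {
  return summary.artifacts.find((a) => a.url !== undefined)?.url;
}
```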
Changelog
- 0.1.1 — Add `structuredResponse` support (client + tool), pass through adapter overrides, document structured summaries/`structured_log` events.
License
MIT
