sparkecoder v0.1.36
🐶SparkECoder🐶
A powerful coding agent CLI with HTTP API. Built with the Vercel AI SDK.
Features
- 🤖 Multi-Agent Sessions - Run multiple agents simultaneously with isolated contexts
- 🔄 Streaming Responses - Real-time SSE streaming following Vercel AI SDK data stream protocol
- 🔧 Powerful Tools - Bash execution, file operations, planning, and skill loading
- ✅ Tool Approvals - Configurable approval requirements for dangerous operations
- 📚 Skills System - Load specialized knowledge documents into context
- 💾 SQLite Persistence - Full session and message history storage
- 🌐 HTTP API - RESTful API with auto-generated OpenAPI specification via hono-openapi
- 🎯 Context Management - Automatic summarization for long conversations
Installation
From GitHub Packages
```shell
# Configure npm to use GitHub Packages for the @gostudyfetchgo scope
npm config set @gostudyfetchgo:registry https://npm.pkg.github.com

# Install the package (no auth required - it's public!)
npm install @gostudyfetchgo/sparkecoder

# Or with pnpm
pnpm add @gostudyfetchgo/sparkecoder
```
From Source
```shell
# Clone the repository
git clone https://github.com/gostudyfetchgo/sparkecoder.git
cd sparkecoder

# Install dependencies
pnpm install

# Set up environment variables
export AI_GATEWAY_API_KEY=your_api_key_here

# Start the server
pnpm dev
```
Global CLI Installation
```shell
npm install -g @gostudyfetchgo/sparkecoder
sparkecoder start
```
Quick Start
Initialize Configuration
```shell
sparkecoder init
```
This creates a `sparkecoder.config.json` file with default settings.
Start the Server
```shell
sparkecoder start
```
The server runs at `http://localhost:3141` by default.
Make Your First Request
```shell
# Create a session and run a prompt
curl -X POST http://localhost:3141/agents/quick \
  -H "Content-Type: application/json" \
  -d '{"prompt": "List the files in the current directory"}'
```
API Reference
Sessions
| Endpoint | Method | Description |
|----------|--------|-------------|
| /sessions | GET | List all sessions |
| /sessions | POST | Create a new session |
| /sessions/:id | GET | Get session details |
| /sessions/:id | DELETE | Delete a session |
| /sessions/:id/messages | GET | Get session messages |
| /sessions/:id/clear | POST | Clear session context |
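The session endpoints above can be driven from any HTTP client. A minimal TypeScript sketch, assuming only the paths in the table (the response shapes are not documented here, so they are left as `unknown`):

```typescript
// Build the URL for a session endpoint; paths come from the table above.
function sessionUrl(base: string, id?: string, sub?: "messages" | "clear"): string {
  const parts = [base, "sessions"];
  if (id) parts.push(id);
  if (sub) parts.push(sub);
  return parts.join("/");
}

// Fetch a session's messages (requires a running server).
async function listMessages(base: string, sessionId: string): Promise<unknown> {
  const res = await fetch(sessionUrl(base, sessionId, "messages"));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

console.log(sessionUrl("http://localhost:3141", "abc123", "messages"));
// → http://localhost:3141/sessions/abc123/messages
```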
Agents
| Endpoint | Method | Description |
|----------|--------|-------------|
| /agents/:id/run | POST | Run agent with streaming (SSE) |
| /agents/:id/generate | POST | Run agent without streaming |
| /agents/:id/approve/:toolCallId | POST | Approve pending tool |
| /agents/:id/reject/:toolCallId | POST | Reject pending tool |
| /agents/:id/approvals | GET | Get pending approvals |
| /agents/quick | POST | Create session and run in one request |
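For one-off prompts, `/agents/quick` creates a session and runs it in a single request. A hedged TypeScript sketch: the `{ prompt }` body shape is taken from the quick-start curl example above, and the response type is assumed to be JSON:

```typescript
// Build the request options for POST /agents/quick.
// The { prompt } body shape comes from the quick-start example above.
function quickRunInit(prompt: string): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  };
}

// Create a session and run a prompt in one request (requires a running server).
async function quickRun(base: string, prompt: string): Promise<unknown> {
  const res = await fetch(`${base}/agents/quick`, quickRunInit(prompt));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```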
OpenAPI
The full OpenAPI specification is available at `/openapi.json`.
Configuration
Create a `sparkecoder.config.json` file:
```json
{
  "defaultModel": "anthropic/claude-sonnet-4-20250514",
  "workingDirectory": ".",
  "toolApprovals": {
    "bash": true,
    "write_file": false,
    "read_file": false
  },
  "skills": {
    "directory": "./skills"
  },
  "context": {
    "maxChars": 200000,
    "autoSummarize": true
  },
  "server": {
    "port": 3141,
    "host": "127.0.0.1",
    "publicUrl": "http://your-server:3141"
  }
}
```
Configuration Options
| Option | Description | Default |
|--------|-------------|---------|
| defaultModel | Vercel AI Gateway model string | anthropic/claude-opus-4-5 |
| workingDirectory | Base directory for file operations | Current directory |
| toolApprovals | Which tools require user approval | { bash: true } |
| skills.directory | Directory containing skill files | ./skills |
| context.maxChars | Max context size before summarization | 200000 |
| context.autoSummarize | Enable automatic summarization | true |
| server.port | HTTP server port | 3141 |
| server.host | HTTP server host | 127.0.0.1 |
| server.publicUrl | Public URL for web UI (Docker/remote) | Auto-detected |
Tools
bash
Execute shell commands in the working directory.
```json
{
  "command": "ls -la"
}
```
read_file
Read file contents with optional line range.
```json
{
  "path": "src/index.ts",
  "startLine": 1,
  "endLine": 50
}
```
write_file
Write or edit files. Supports two modes:
Full write:
```json
{
  "path": "new-file.ts",
  "mode": "full",
  "content": "// New file content"
}
```
String replacement:
```json
{
  "path": "existing-file.ts",
  "mode": "str_replace",
  "old_string": "const x = 1;",
  "new_string": "const x = 2;"
}
```
todo
Manage task lists for complex operations.
```json
{
  "action": "add",
  "items": [
    { "content": "Step 1: Analyze code" },
    { "content": "Step 2: Implement fix" }
  ]
}
```
load_skill
Load specialized knowledge into context.
```json
{
  "action": "load",
  "skillName": "Debugging"
}
```
Skills
Skills are markdown files with specialized knowledge. Place them in your skills directory:
```markdown
---
name: My Custom Skill
description: Description of what this skill provides
---

# My Custom Skill

Detailed content that will be loaded into context...
```
Built-in skills:
- Debugging - Systematic debugging approaches
- Code Review - Code review checklists and best practices
- Refactoring - Safe refactoring patterns and techniques
CLI Commands
```shell
sparkecoder start      # Start the HTTP server
sparkecoder chat       # Interactive chat with the agent
sparkecoder init       # Create config file
sparkecoder sessions   # List all sessions
sparkecoder status     # Check if server is running
sparkecoder config     # Show current configuration
sparkecoder info       # Show version and environment
```
Interactive Chat
Start an interactive chat session with the agent:
```shell
# Start a new chat session
sparkecoder chat

# Resume an existing session
sparkecoder chat --session <session-id>

# Start with custom options
sparkecoder chat --name "My Project" --model "anthropic/claude-sonnet-4-20250514"
```
In-chat commands:
- `/quit` or `/exit` - Exit the chat
- `/clear` - Clear conversation history
- `/session` - Show current session info
- `/tools` - List available tools
Streaming Protocol
The API uses Server-Sent Events (SSE) following the Vercel AI SDK data stream protocol.
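The stream can also be consumed by hand from any HTTP client. This sketch shows only the generic SSE framing (lines prefixed with `data: `); the payloads inside each event follow the Vercel AI SDK data stream protocol and are illustrative here, not reproduced from the docs:

```typescript
// Extract the data payloads from a chunk of raw SSE text.
// This is generic SSE handling, not specific to SparkECoder.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}

// Payloads below are illustrative, not actual SparkECoder events.
console.log(parseSseData('data: {"type":"text-delta"}\n\ndata: [DONE]\n'));
// returns ['{"type":"text-delta"}', '[DONE]']
```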
Compatible with useChat from @ai-sdk/react:
```typescript
import { useChat } from '@ai-sdk/react';

const { messages, sendMessage } = useChat({
  api: 'http://localhost:3141/agents/SESSION_ID/run',
});
```
Tool Approvals
Configure which tools require approval:
```jsonc
{
  "toolApprovals": {
    "bash": true,       // Requires approval
    "write_file": true  // Requires approval
  }
}
```
When approval is required:
- The agent pauses and streams an `approval-required` event
- Call `/agents/:id/approve/:toolCallId` to approve
- Call `/agents/:id/reject/:toolCallId` to reject
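A sketch of the approval round-trip from a client, assuming only the endpoint paths in the API table above (no request body is documented, so none is sent):

```typescript
// Build the approve/reject URL for a pending tool call.
function approvalUrl(
  base: string,
  agentId: string,
  decision: "approve" | "reject",
  toolCallId: string,
): string {
  return `${base}/agents/${agentId}/${decision}/${toolCallId}`;
}

// Resolve a pending tool call (requires a running server).
async function decide(
  base: string,
  agentId: string,
  decision: "approve" | "reject",
  toolCallId: string,
): Promise<number> {
  const res = await fetch(approvalUrl(base, agentId, decision, toolCallId), {
    method: "POST",
  });
  return res.status;
}

console.log(approvalUrl("http://localhost:3141", "s1", "approve", "tc_1"));
// → http://localhost:3141/agents/s1/approve/tc_1
```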
Environment Variables
| Variable | Description |
|----------|-------------|
| AI_GATEWAY_API_KEY | Vercel AI Gateway API key (required) |
| SPARKECODER_MODEL | Override default model |
| SPARKECODER_PORT | Override server port |
| DATABASE_PATH | Override database path |
Docker / Remote Access
When running SparkECoder in Docker or exposing it to remote clients, you need to configure the public URL so the web UI can connect to the API from the browser.
CLI Option
```shell
sparkecoder start --public-url http://your-server:3141
```
Config File
```json
{
  "server": {
    "port": 3141,
    "host": "0.0.0.0",
    "publicUrl": "http://your-server:3141"
  }
}
```
Notes:
- Set `host` to `0.0.0.0` to bind to all interfaces (required for Docker/remote access)
- Set `publicUrl` to the URL the browser will use to reach the API
- The web UI detects this URL automatically on first load and stores it in localStorage
Development
```shell
# Run in development mode with hot reload
pnpm dev

# Type check
pnpm typecheck

# Build for production
pnpm build

# Run production build
pnpm start
```
Testing
SparkECoder includes comprehensive end-to-end tests that make actual API calls to the LLM.
```shell
# Run all E2E tests (requires AI_GATEWAY_API_KEY)
pnpm test:e2e

# Run tests in watch mode
pnpm test:watch
```
The tests cover:
- Health & server endpoints
- Session management (CRUD operations)
- Agent text generation (streaming & non-streaming)
- File operations (create, read, edit)
- Bash command execution
- Todo management
- Multi-turn conversations with context
- Tool approvals workflow
Note: E2E tests require a valid AI_GATEWAY_API_KEY and will make real LLM calls. They create a temporary .test-workspace directory that is cleaned up after tests complete.
License
Proprietary - All rights reserved.
