sparkecoder v0.1.36

🐶SparkECoder🐶

A powerful coding agent CLI with HTTP API. Built with the Vercel AI SDK.

Features

  • 🤖 Multi-Agent Sessions - Run multiple agents simultaneously with isolated contexts
  • 🔄 Streaming Responses - Real-time SSE streaming following Vercel AI SDK data stream protocol
  • 🔧 Powerful Tools - Bash execution, file operations, planning, and skill loading
  • Tool Approvals - Configurable approval requirements for dangerous operations
  • 📚 Skills System - Load specialized knowledge documents into context
  • 💾 SQLite Persistence - Full session and message history storage
  • 🌐 HTTP API - RESTful API with auto-generated OpenAPI specification via hono-openapi
  • 🎯 Context Management - Automatic summarization for long conversations

Installation

From GitHub Packages

# Configure npm to use GitHub Packages for @gostudyfetchgo scope
npm config set @gostudyfetchgo:registry https://npm.pkg.github.com

# Install the package (no auth required - it's public!)
npm install @gostudyfetchgo/sparkecoder

# Or with pnpm
pnpm add @gostudyfetchgo/sparkecoder

From Source

# Clone the repository
git clone https://github.com/gostudyfetchgo/sparkecoder.git
cd sparkecoder

# Install dependencies
pnpm install

# Set up environment variables
export AI_GATEWAY_API_KEY=your_api_key_here

# Start the server
pnpm dev

Global CLI Installation

npm install -g @gostudyfetchgo/sparkecoder
sparkecoder start

Quick Start

Initialize Configuration

sparkecoder init

This creates a sparkecoder.config.json file with default settings.

Start the Server

sparkecoder start

The server runs at http://localhost:3141 by default.

Make Your First Request

# Create a session and run a prompt
curl -X POST http://localhost:3141/agents/quick \
  -H "Content-Type: application/json" \
  -d '{"prompt": "List the files in the current directory"}'

API Reference

Sessions

| Endpoint | Method | Description |
|----------|--------|-------------|
| /sessions | GET | List all sessions |
| /sessions | POST | Create a new session |
| /sessions/:id | GET | Get session details |
| /sessions/:id | DELETE | Delete a session |
| /sessions/:id/messages | GET | Get session messages |
| /sessions/:id/clear | POST | Clear session context |
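
For example, listing and creating sessions with curl might look like the following. This is a sketch: the "name" field in the create body is an assumption based on the chat CLI's --name option, not a documented request schema.

# List existing sessions
curl http://localhost:3141/sessions

# Create a new session ("name" is an assumed field)
curl -X POST http://localhost:3141/sessions \
  -H "Content-Type: application/json" \
  -d '{"name": "My Project"}'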

Agents

| Endpoint | Method | Description |
|----------|--------|-------------|
| /agents/:id/run | POST | Run agent with streaming (SSE) |
| /agents/:id/generate | POST | Run agent without streaming |
| /agents/:id/approve/:toolCallId | POST | Approve pending tool |
| /agents/:id/reject/:toolCallId | POST | Reject pending tool |
| /agents/:id/approvals | GET | Get pending approvals |
| /agents/quick | POST | Create session and run in one request |
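
A minimal sketch of a non-streaming call, assuming /agents/:id/generate accepts the same {"prompt": ...} body shown for /agents/quick in the Quick Start:

# SESSION_ID is the id of a previously created session
curl -X POST http://localhost:3141/agents/SESSION_ID/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize the project structure"}'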

OpenAPI

The full OpenAPI specification is available at /openapi.json.
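
For a quick look at the available routes, you can fetch the spec and filter it (jq is optional here):

curl -s http://localhost:3141/openapi.json | jq '.paths | keys'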

Configuration

Create a sparkecoder.config.json file:

{
  "defaultModel": "anthropic/claude-sonnet-4-20250514",
  "workingDirectory": ".",
  "toolApprovals": {
    "bash": true,
    "write_file": false,
    "read_file": false
  },
  "skills": {
    "directory": "./skills"
  },
  "context": {
    "maxChars": 200000,
    "autoSummarize": true
  },
  "server": {
    "port": 3141,
    "host": "127.0.0.1",
    "publicUrl": "http://your-server:3141"
  }
}

Configuration Options

| Option | Description | Default |
|--------|-------------|---------|
| defaultModel | Vercel AI Gateway model string | anthropic/claude-opus-4-5 |
| workingDirectory | Base directory for file operations | Current directory |
| toolApprovals | Which tools require user approval | { bash: true } |
| skills.directory | Directory containing skill files | ./skills |
| context.maxChars | Max context size before summarization | 200000 |
| context.autoSummarize | Enable automatic summarization | true |
| server.port | HTTP server port | 3141 |
| server.host | HTTP server host | 127.0.0.1 |
| server.publicUrl | Public URL for web UI (Docker/remote) | Auto-detected |

Tools

bash

Execute shell commands in the working directory.

{
  "command": "ls -la"
}

read_file

Read file contents with optional line range.

{
  "path": "src/index.ts",
  "startLine": 1,
  "endLine": 50
}

write_file

Write or edit files. Supports two modes:

Full write:

{
  "path": "new-file.ts",
  "mode": "full",
  "content": "// New file content"
}

String replacement:

{
  "path": "existing-file.ts",
  "mode": "str_replace",
  "old_string": "const x = 1;",
  "new_string": "const x = 2;"
}

todo

Manage task lists for complex operations.

{
  "action": "add",
  "items": [
    { "content": "Step 1: Analyze code" },
    { "content": "Step 2: Implement fix" }
  ]
}

load_skill

Load specialized knowledge into context.

{
  "action": "load",
  "skillName": "Debugging"
}

Skills

Skills are markdown files with specialized knowledge. Place them in your skills directory:

---
name: My Custom Skill
description: Description of what this skill provides
---

# My Custom Skill

Detailed content that will be loaded into context...

Built-in skills:

  • Debugging - Systematic debugging approaches
  • Code Review - Code review checklists and best practices
  • Refactoring - Safe refactoring patterns and techniques

CLI Commands

sparkecoder start     # Start the HTTP server
sparkecoder chat      # Interactive chat with the agent
sparkecoder init      # Create config file
sparkecoder sessions  # List all sessions
sparkecoder status    # Check if server is running
sparkecoder config    # Show current configuration
sparkecoder info      # Show version and environment

Interactive Chat

Start an interactive chat session with the agent:

# Start a new chat session
sparkecoder chat

# Resume an existing session
sparkecoder chat --session <session-id>

# Start with custom options
sparkecoder chat --name "My Project" --model "anthropic/claude-sonnet-4-20250514"

In-chat commands:

  • /quit or /exit - Exit the chat
  • /clear - Clear conversation history
  • /session - Show current session info
  • /tools - List available tools

Streaming Protocol

The API uses Server-Sent Events (SSE) following the Vercel AI SDK data stream protocol.

Compatible with useChat from @ai-sdk/react:

import { useChat } from '@ai-sdk/react';

const { messages, sendMessage } = useChat({
  api: 'http://localhost:3141/agents/SESSION_ID/run',
});
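
You can also consume the stream directly with curl. This is a sketch that assumes /agents/:id/run accepts the same {"prompt": ...} body as /agents/quick:

# -N disables output buffering so SSE events print as they arrive
curl -N -X POST http://localhost:3141/agents/SESSION_ID/run \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"prompt": "Run ls and summarize the output"}'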

Tool Approvals

Configure which tools require approval:

{
  "toolApprovals": {
    "bash": true,      // Requires approval
    "write_file": true // Requires approval
  }
}

When approval is required:

  1. The agent pauses and streams an approval-required event
  2. Call /agents/:id/approve/:toolCallId to approve
  3. Call /agents/:id/reject/:toolCallId to reject
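
For example, resolving a pending bash call might look like this (a sketch; TOOL_CALL_ID is whatever identifier arrives in the approval-required event):

# Approve the pending tool call
curl -X POST http://localhost:3141/agents/SESSION_ID/approve/TOOL_CALL_ID

# Or reject it instead
curl -X POST http://localhost:3141/agents/SESSION_ID/reject/TOOL_CALL_ID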

Environment Variables

| Variable | Description |
|----------|-------------|
| AI_GATEWAY_API_KEY | Vercel AI Gateway API key (required) |
| SPARKECODER_MODEL | Override default model |
| SPARKECODER_PORT | Override server port |
| DATABASE_PATH | Override database path |
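
A typical shell setup before starting the server (the model string is just an example value):

export AI_GATEWAY_API_KEY=your_api_key_here
export SPARKECODER_MODEL=anthropic/claude-sonnet-4-20250514   # optional override
export SPARKECODER_PORT=3141                                  # optional override
sparkecoder start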

Docker / Remote Access

When running SparkECoder in Docker or exposing it to remote clients, you need to configure the public URL so the web UI can connect to the API from the browser.

CLI Option

sparkecoder start --public-url http://your-server:3141

Config File

{
  "server": {
    "port": 3141,
    "host": "0.0.0.0",
    "publicUrl": "http://your-server:3141"
  }
}

Notes:

  • Set host to 0.0.0.0 to bind to all interfaces (required for Docker/remote access)
  • Set publicUrl to the URL the browser will use to reach the API
  • The web UI detects this URL automatically on first load and stores it in localStorage
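
A minimal remote-access check, where your-server stands in for the host's externally reachable address:

# On the server: bind all interfaces (host 0.0.0.0 in the config) and advertise the public URL
sparkecoder start --public-url http://your-server:3141

# From a remote client: confirm the API is reachable
curl http://your-server:3141/openapi.json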

Development

# Run in development mode with hot reload
pnpm dev

# Type check
pnpm typecheck

# Build for production
pnpm build

# Run production build
pnpm start

Testing

SparkECoder includes comprehensive end-to-end tests that make actual API calls to the LLM.

# Run all E2E tests (requires AI_GATEWAY_API_KEY)
pnpm test:e2e

# Run tests in watch mode
pnpm test:watch

The tests cover:

  • Health & server endpoints
  • Session management (CRUD operations)
  • Agent text generation (streaming & non-streaming)
  • File operations (create, read, edit)
  • Bash command execution
  • Todo management
  • Multi-turn conversations with context
  • Tool approvals workflow

Note: E2E tests require a valid AI_GATEWAY_API_KEY and will make real LLM calls. They create a temporary .test-workspace directory that is cleaned up after tests complete.

License

Proprietary - All rights reserved.