# btca-server
BTCA (Better Context AI) server for answering questions about your codebase using OpenCode AI.
## Installation
```sh
bun add btca-server
```

## Usage
### Starting the Server
```ts
import { startServer } from 'btca-server';

// Start with default options (port 8080 or process.env.PORT)
const server = await startServer();
console.log(`Server running at ${server.url}`);

// Start with a custom port
const customServer = await startServer({ port: 3000 });

// Start in quiet mode (no logging)
const quietServer = await startServer({ port: 3001, quiet: true });

// Stop the server when needed
server.stop();
```
### Server Instance

The `startServer` function returns a `ServerInstance` object:
```ts
interface ServerInstance {
  port: number;     // Actual port the server is running on
  url: string;      // Full URL (e.g., "http://localhost:8080")
  stop: () => void; // Function to stop the server
}
```
### Random Port Assignment

You can pass `port: 0` to let the OS assign a random available port:
```ts
const server = await startServer({ port: 0 });
console.log(`Server running on port ${server.port}`);
```
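A random port is handy in tests, where fixed ports can collide. Here is a minimal sketch using Bun's built-in test runner (the test itself is illustrative, not part of the package):

```ts
import { test, expect } from 'bun:test';
import { startServer } from 'btca-server';

test('health check responds', async () => {
  const server = await startServer({ port: 0, quiet: true });
  const res = await fetch(server.url); // GET / is the health check endpoint
  expect(res.ok).toBe(true);
  server.stop();
});
```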
## API Endpoints

Once the server is running, it exposes the following REST API endpoints:
### Health Check

- `GET /`: Returns service status and version info.
### Configuration

- `GET /config`: Returns current configuration (provider, model, resources).
### Resources

- `GET /resources`: Lists all configured resources (local directories or git repositories).
- `POST /config/resources`: Adds a new resource (git or local).
- `DELETE /config/resources`: Removes a resource by name.
- `POST /clear`: Clears all locally cloned resources.
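For example, a git resource might be added with a plain `fetch` call. The request body shown here is an assumption, modeled on the `[[resources]]` fields in the `config.toml` example below; check the server's actual schema before relying on it:

```ts
const base = 'http://localhost:8080';

// Hypothetical payload shape, mirroring the [[resources]] entries in config.toml
await fetch(`${base}/config/resources`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    type: 'git',
    name: 'some-repo',
    url: 'https://github.com/user/repo',
    branch: 'main',
  }),
});

// List the configured resources to confirm
const resources = await (await fetch(`${base}/resources`)).json();
console.log(resources);
```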
### Questions

- `POST /question`: Asks a question (non-streaming response).
- `POST /question/stream`: Asks a question with a streaming SSE response.
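A non-streaming question might look like the sketch below. The body shape (`question` plus resource names) is an assumption for illustration; consult the route's actual contract:

```ts
// Hypothetical request body; field names are assumed, not documented here
const answer = await fetch('http://localhost:8080/question', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: 'Where is the server started?',
    resources: ['my-project'],
  }),
});
console.log(await answer.json());
```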
### OpenCode Instance

- `POST /opencode`: Returns an OpenCode instance URL for a collection of resources.
### Model Configuration

- `PUT /config/model`: Updates the AI provider and model configuration.
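A sketch of updating the model, assuming the payload mirrors the `provider` and `model` keys from `config.toml` (an unverified assumption):

```ts
// Field names assumed to mirror the config.toml keys below
await fetch('http://localhost:8080/config/model', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    provider: 'anthropic',
    model: 'claude-3-7-sonnet-20250219',
  }),
});
```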
## Configuration

The server reads configuration from `~/.btca/config.toml` or your local project's `.btca/config.toml`. You'll need to configure:
- AI Provider: OpenCode AI provider (e.g., "anthropic")
- Model: AI model to use (e.g., "claude-3-7-sonnet-20250219")
- Resources: Local directories or git repositories to query
Example `config.toml`:

```toml
provider = "anthropic"
model = "claude-3-7-sonnet-20250219"
resourcesDirectory = "~/.btca/resources"

[[resources]]
type = "local"
name = "my-project"
path = "/path/to/my/project"

[[resources]]
type = "git"
name = "some-repo"
url = "https://github.com/user/repo"
branch = "main"
```

### Environment Variables
- `PORT`: Server port (default: 8080)
- `OPENCODE_API_KEY`: OpenCode AI API key (required)
## TypeScript Types

The package exports TypeScript types for use with the Hono RPC client:
```ts
import type { AppType } from 'btca-server';
import { hc } from 'hono/client';

const client = hc<AppType>('http://localhost:8080');
```
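With the typed client, routes are called through Hono's RPC proxy, which maps path segments to properties and HTTP methods to `$get`, `$post`, and so on. For instance, fetching the configuration might look like this (assuming the `/config` route is exposed on `AppType` as listed above):

```ts
// GET /config through the typed Hono RPC proxy
const res = await client.config.$get();
if (res.ok) {
  console.log(await res.json());
}
```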
### Stream Types

For working with SSE streaming responses:
```ts
import type { BtcaStreamEvent, BtcaStreamMetaEvent } from 'btca-server/stream/types';
```
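These types can annotate a hand-rolled SSE reader for `POST /question/stream`. The sketch below assumes each SSE `data:` line carries one JSON-encoded `BtcaStreamEvent`, and reuses the hypothetical request body from the `/question` example; the actual wire format may differ:

```ts
import type { BtcaStreamEvent } from 'btca-server/stream/types';

// Hypothetical request body, as in the /question example above
const res = await fetch('http://localhost:8080/question/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ question: 'How does startServer work?', resources: ['my-project'] }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  const lines = buffer.split('\n');
  buffer = lines.pop() ?? ''; // keep any partial line for the next chunk

  // SSE data lines start with "data:"; each is assumed to hold one JSON event
  for (const line of lines) {
    if (line.startsWith('data:')) {
      const event = JSON.parse(line.slice(5).trim()) as BtcaStreamEvent;
      console.log(event);
    }
  }
}
```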
## Requirements

- Bun: >= 1.1.0 (this package is designed specifically for the Bun runtime)
- OpenCode AI API Key: Required for AI functionality
## License
MIT
