@enhance-eng/operator
v0.1.0
Browser-based LLM agent runtime - run AI agents entirely client-side with tool execution
@opencode/browser
OpenCode for the browser — Run an AI coding assistant entirely in your web application.
Unlike traditional AI chat widgets that are just thin wrappers around an API, @opencode/browser runs the full OpenCode engine client-side. The AI can actually do things in your app by calling tools you define, not just chat.
Key Concept: The AI Does Things Via Tools
This is not Puppeteer-style automation where the AI clicks buttons and types into fields. Instead:
- You define tools that expose your app's functionality (e.g., addTodo, createUser, sendEmail)
- The AI calls these tools to perform actions
- The AI can read the DOM to understand context
- The AI can guide users by highlighting UI elements
Think of it like giving the AI an API to your app, not a mouse and keyboard.
Table of Contents
- Quick Start
- Architecture Overview
- Setting Up Your Backend
- Defining Tools
- Defining Skills
- React Integration
- Complete Example
- API Reference
Quick Start
npm install @opencode/browser
import { useState } from "react";
import { OpenCodeProvider, useChat } from "@opencode/browser/react";
import { defineTool, createLLMProvider, InMemoryStorageAdapter } from "@opencode/browser";
import { z } from "zod";
// 1. Define tools that the AI can use
const greetTool = defineTool({
name: "greet",
description: "Greet a user by name",
parameters: z.object({
name: z.string().describe("The name to greet"),
}),
execute: async ({ name }) => ({
output: `Hello, ${name}!`,
title: `Greeted ${name}`,
}),
});
// 2. Set up the provider
function App() {
return (
<OpenCodeProvider
llm={createLLMProvider({ endpoint: "/api/llm" })}
storage={new InMemoryStorageAdapter()}
tools={[greetTool]}
systemPrompt="You are a helpful assistant."
defaultModel={{ providerID: "openrouter", modelID: "anthropic/claude-sonnet-4" }}
>
<Chat />
</OpenCodeProvider>
);
}
// 3. Use the chat hook
function Chat() {
const { messages, send, isStreaming } = useChat();
const [input, setInput] = useState("");
return (
<div>
{messages.map(msg => (
<div key={msg.info.id}>
<strong>{msg.info.role}:</strong>
{msg.parts.filter(p => p.type === "text").map(p => p.text).join("")}
</div>
))}
<input value={input} onChange={e => setInput(e.target.value)} />
<button onClick={() => { send(input); setInput(""); }} disabled={isStreaming}>
Send
</button>
</div>
);
}
Architecture Overview
┌─────────────────────────────────────────────────────────────────┐
│ YOUR WEB APP │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ React UI │◄──►│ OpenCode │◄──►│ Your Tools │ │
│ │ │ │ Engine │ │ (addTodo, │ │
│ │ useChat() │ │ (runs in │ │ createUser, │ │
│ │ useTodos() │ │ browser) │ │ etc.) │ │
│ └──────────────┘ └──────┬───────┘ └──────────────────┘ │
│ │ │
└─────────────────────────────┼───────────────────────────────────┘
│
▼
┌──────────────────┐
│ YOUR BACKEND │
│ (Dumb Proxy) │
│ │
│ POST /api/llm │
│ │ │
│ ▼ │
│ OpenRouter / │
│ OpenAI / │
│ Anthropic │
└──────────────────┘
Important: The engine runs entirely in the browser. Your backend is just a "dumb proxy" that forwards requests to an LLM provider. This is for security (API keys stay server-side) and to handle CORS.
Setting Up Your Backend
This section defines the protocol your backend needs to implement. You can:
- Name your endpoints whatever you want (/api/llm, /chat/completions, /v1/ai, etc.)
- Use any language or framework (Node.js, Python, Go, Rails, etc.)
- Integrate with your existing backend - just write an adapter that speaks this protocol
- Use any LLM provider (OpenRouter, OpenAI, Anthropic, local models, etc.)
The only requirement: Your endpoint must accept a specific request format and stream back a specific response format. That's it.
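Whichever stack you use, the response side boils down to serializing each event object onto the wire in one of the two framings described below. A minimal sketch (the event shape follows the protocol spec; the helper names are ours, not part of the package):

```typescript
// Serialize one stream event for the wire.
// SSE frames each JSON object as "data: {...}\n\n";
// NDJSON is one JSON object per line.
type StreamEvent = { type: string; [key: string]: unknown };

function encodeSSE(event: StreamEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

function encodeNDJSON(event: StreamEvent): string {
  return `${JSON.stringify(event)}\n`;
}

// Example: a text chunk, SSE-framed
const frame = encodeSSE({ type: "text-delta", text: "Hello" });
// frame === 'data: {"type":"text-delta","text":"Hello"}\n\n'
```

Every example server below is ultimately just calling something like `encodeSSE` on each event it produces.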
The LLM Provider Interface
When you create the LLM provider, you tell it where your endpoint is:
import { createLLMProvider } from "@opencode/browser";
// Point it at YOUR endpoint - name it whatever you want
const llm = createLLMProvider({
endpoint: "/api/llm", // or "/my-company/ai/chat" or "https://api.myapp.com/v1/completions"
// Optional: add auth headers
headers: async () => ({
Authorization: `Bearer ${await getAuthToken()}`,
}),
// Optional: use NDJSON instead of SSE
format: "sse", // or "ndjson"
});
Or implement the LLMProvider interface directly for full control:
import type { LLMProvider, LLMStreamParams, LLMStreamEvent } from "@opencode/browser";
const customLLM: LLMProvider = {
async *stream(params: LLMStreamParams): AsyncIterable<LLMStreamEvent> {
// params contains: model, messages, tools, system, abortSignal
// Call your backend however you want
const response = await fetch("/your/endpoint", {
method: "POST",
body: JSON.stringify(transformToYourFormat(params)),
});
// Yield events in the expected format
yield { type: "start" };
for await (const chunk of parseYourResponse(response)) {
yield { type: "text-delta", text: chunk.text };
}
yield { type: "finish", reason: "stop", usage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 } };
}
};
Protocol Specification
This is the contract between the browser and your backend. Follow this spec and everything works.
Request Format (browser → backend)
POST /your/endpoint
Content-Type: application/json
{
"model": {
"providerID": "openrouter",
"modelID": "anthropic/claude-sonnet-4"
},
"messages": [
{ "role": "user", "content": "Hello" },
{ "role": "assistant", "content": [
{ "type": "text", "text": "Hi there!" },
{ "type": "tool-call", "toolCallId": "123", "toolName": "greet", "args": { "name": "World" } }
]},
{ "role": "tool", "content": [
{ "type": "tool-result", "toolCallId": "123", "toolName": "greet", "result": "Hello, World!" }
]}
],
"tools": {
"greet": {
"name": "greet",
"description": "Greet a user",
"inputSchema": { "type": "object", "properties": { "name": { "type": "string" } } }
}
},
"system": "You are a helpful assistant."
}
Response Format (backend → browser)
Your backend streams back events. You can use either format:
Option A: Server-Sent Events (SSE) - Content-Type: text/event-stream
data: {"type":"start"}
data: {"type":"text-delta","text":"Hello"}
data: {"type":"finish","reason":"stop","usage":{"promptTokens":100,"completionTokens":50,"totalTokens":150}}Option B: Newline-Delimited JSON (NDJSON) - Content-Type: application/x-ndjson
{"type":"start"}
{"type":"text-delta","text":"Hello"}
{"type":"finish","reason":"stop","usage":{"promptTokens":100,"completionTokens":50,"totalTokens":150}}Stream Event Types
These are the events your backend needs to emit. Most are optional - at minimum you need text-delta and finish.
| Event | Required | Description |
|-------|----------|-------------|
| start | No | Stream started (nice to have) |
| text-start | No | Text content beginning |
| text-delta | Yes | Chunk of text: { text: string } |
| text-end | No | Text content finished |
| tool-call-start | No | Tool call beginning: { toolCallId, toolName } |
| tool-call-delta | No | Tool args chunk: { toolCallId, argsText } |
| tool-call | Yes* | Complete tool call: { toolCallId, toolName, args } |
| finish | Yes | Stream complete: { reason, usage } |
| error | No | Error occurred: { error: { message } } |
*Required if the LLM wants to call tools
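To make the table concrete, here is what a minimal conforming sequence looks like for a plain text reply versus a tool-calling turn (a sketch with illustrative values; only the required events plus start are shown, and the finish reason "tool-calls" is what signals a tool-calling turn per the event types below):

```typescript
// Minimal text-only turn: text-delta chunks, then finish with reason "stop".
const textTurn = [
  { type: "start" },
  { type: "text-delta", text: "Hello, " },
  { type: "text-delta", text: "World!" },
  { type: "finish", reason: "stop", usage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 } },
];

// Minimal tool-calling turn: the complete tool-call event is required,
// and finish reports reason "tool-calls" instead of "stop".
// The engine then executes the tool and sends the result back
// as a role:"tool" message in the next request.
const toolTurn = [
  { type: "start" },
  { type: "tool-call", toolCallId: "123", toolName: "greet", args: { name: "World" } },
  { type: "finish", reason: "tool-calls", usage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 } },
];
```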
TypeScript Types for Events
type LLMStreamEvent =
| { type: "start" }
| { type: "text-start" }
| { type: "text-delta"; text: string }
| { type: "text-end" }
| { type: "tool-call-start"; toolCallId: string; toolName: string }
| { type: "tool-call-delta"; toolCallId: string; argsText: string }
| { type: "tool-call"; toolCallId: string; toolName: string; args: Record<string, unknown> }
| { type: "finish"; reason: "stop" | "tool-calls"; usage: { promptTokens: number; completionTokens: number; totalTokens: number } }
| { type: "error"; error: { message: string } };Example Implementations
Node.js + OpenRouter
// server.ts
import { createServer } from "http";
const OPENROUTER_KEY = process.env.OPENROUTER_API_KEY;
createServer(async (req, res) => {
if (req.method !== "POST" || req.url !== "/api/llm") {
res.writeHead(404);
res.end();
return;
}
// Parse request
let body = "";
for await (const chunk of req) body += chunk;
const params = JSON.parse(body);
// Set up SSE
res.writeHead(200, {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
});
const send = (event: any) => res.write(`data: ${JSON.stringify(event)}\n\n`);
// Convert to OpenAI format and call OpenRouter
const messages = [
{ role: "system", content: params.system },
...convertMessages(params.messages),
];
const tools = Object.values(params.tools).map(t => ({
type: "function",
function: { name: t.name, description: t.description, parameters: t.inputSchema },
}));
send({ type: "start" });
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${OPENROUTER_KEY}`,
},
body: JSON.stringify({
model: params.model.modelID,
messages,
tools: tools.length > 0 ? tools : undefined,
stream: true,
}),
});
// Stream the response back, converting OpenAI format to our format
// See example/server.ts for full implementation
res.end();
}).listen(3001);
See example/server.ts for a complete implementation.
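The example above calls a convertMessages helper it doesn't define. Here is a hedged sketch of what it might look like, mapping the protocol's message shapes (see Request Format above) to standard OpenAI Chat Completions messages. The helper and types are ours; adjust the output side to whatever your provider expects:

```typescript
// Protocol message shapes, as defined in the Request Format section.
type ProtocolMessage =
  | { role: "user"; content: string }
  | { role: "assistant"; content: Array<
      | { type: "text"; text: string }
      | { type: "tool-call"; toolCallId: string; toolName: string; args: Record<string, unknown> }
    > }
  | { role: "tool"; content: Array<{ type: "tool-result"; toolCallId: string; toolName: string; result: unknown }> };

function convertMessages(messages: ProtocolMessage[]): any[] {
  const out: any[] = [];
  for (const msg of messages) {
    if (msg.role === "user") {
      out.push({ role: "user", content: msg.content });
    } else if (msg.role === "assistant") {
      // Flatten text parts; turn tool-call parts into OpenAI tool_calls.
      const text = msg.content.filter(p => p.type === "text").map(p => (p as any).text).join("");
      const toolCalls = msg.content
        .filter(p => p.type === "tool-call")
        .map((p: any) => ({
          id: p.toolCallId,
          type: "function",
          function: { name: p.toolName, arguments: JSON.stringify(p.args) },
        }));
      out.push({
        role: "assistant",
        content: text || null,
        ...(toolCalls.length ? { tool_calls: toolCalls } : {}),
      });
    } else {
      // Each tool result becomes its own "tool" message, linked by tool_call_id.
      for (const p of msg.content) {
        out.push({ role: "tool", tool_call_id: p.toolCallId, content: String(p.result) });
      }
    }
  }
  return out;
}
```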
Python + FastAPI + OpenAI
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from openai import OpenAI
import json

app = FastAPI()
client = OpenAI()

@app.post("/api/llm")
async def llm_proxy(request: Request):
    params = await request.json()

    def generate():  # plain generator: StreamingResponse runs it in a threadpool
        # Convert to OpenAI format
        messages = [{"role": "system", "content": params["system"]}]
        for msg in params["messages"]:
            messages.append(convert_message(msg))  # map protocol messages to OpenAI shape (not shown)
        tools = [
            {"type": "function", "function": {"name": t["name"], "description": t["description"], "parameters": t["inputSchema"]}}
            for t in params["tools"].values()
        ]

        yield f"data: {json.dumps({'type': 'start'})}\n\n"

        stream = client.chat.completions.create(
            model=params["model"]["modelID"],
            messages=messages,
            tools=tools if tools else None,
            stream=True,
        )

        # Accumulate tool-call fragments so we can emit the complete
        # 'tool-call' event, and report the real finish reason rather
        # than hardcoding 'stop' (tool calling breaks otherwise).
        tool_calls = {}  # index -> {"id", "name", "args"}
        finish_reason = "stop"
        for chunk in stream:
            choice = chunk.choices[0]
            delta = choice.delta
            if delta.content:
                yield f"data: {json.dumps({'type': 'text-delta', 'text': delta.content})}\n\n"
            if delta.tool_calls:
                for tc in delta.tool_calls:
                    call = tool_calls.setdefault(tc.index, {"id": "", "name": "", "args": ""})
                    if tc.id:
                        call["id"], call["name"] = tc.id, tc.function.name
                        yield f"data: {json.dumps({'type': 'tool-call-start', 'toolCallId': tc.id, 'toolName': tc.function.name})}\n\n"
                    if tc.function.arguments:
                        call["args"] += tc.function.arguments
                        yield f"data: {json.dumps({'type': 'tool-call-delta', 'toolCallId': call['id'], 'argsText': tc.function.arguments})}\n\n"
            if choice.finish_reason == "tool_calls":
                finish_reason = "tool-calls"

        for call in tool_calls.values():
            yield f"data: {json.dumps({'type': 'tool-call', 'toolCallId': call['id'], 'toolName': call['name'], 'args': json.loads(call['args'] or '{}')})}\n\n"
        yield f"data: {json.dumps({'type': 'finish', 'reason': finish_reason, 'usage': {'promptTokens': 0, 'completionTokens': 0, 'totalTokens': 0}})}\n\n"

    return StreamingResponse(generate(), media_type="text/event-stream")

Adapting an Existing Endpoint
Already have an LLM endpoint? Write an adapter:
import type { LLMProvider, LLMStreamParams, LLMStreamEvent } from "@opencode/browser";
// Your existing endpoint returns a different format? No problem.
const adaptedLLM: LLMProvider = {
async *stream(params: LLMStreamParams): AsyncIterable<LLMStreamEvent> {
// Transform our format to your existing API's format
const response = await fetch("/your/existing/endpoint", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
// Map to whatever your API expects
prompt: params.messages,
model: params.model.modelID,
functions: Object.values(params.tools).map(t => ({
name: t.name,
description: t.description,
parameters: t.inputSchema,
})),
}),
});
yield { type: "start" };
// Parse your API's response format
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value, { stream: true });
// Transform your format to our events - this depends on what your API returns.
// NOTE: for simplicity this assumes one complete JSON object per chunk;
// a real parser must buffer, since objects can be split across network chunks.
const parsed = JSON.parse(chunk);
if (parsed.text) {
yield { type: "text-delta", text: parsed.text };
}
if (parsed.function_call) {
yield {
type: "tool-call",
toolCallId: parsed.function_call.id,
toolName: parsed.function_call.name,
args: JSON.parse(parsed.function_call.arguments),
};
}
}
yield {
type: "finish",
reason: "stop",
usage: { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
};
},
};
// Use it
<OpenCodeProvider llm={adaptedLLM} /* ... */ />
Defining Tools
Tools are how the AI interacts with your app. Each tool has:
- A name and description (tells the AI what it does)
- Parameters (Zod schema defining the input)
- An execute function (what actually happens)
Basic Tool
import { defineTool } from "@opencode/browser";
import { z } from "zod";
const addTodoTool = defineTool({
name: "addTodo",
description: "Add a new todo item to the list",
parameters: z.object({
text: z.string().describe("The todo text"),
priority: z.enum(["low", "medium", "high"]).optional().describe("Priority level"),
}),
execute: async ({ text, priority = "medium" }, ctx) => {
// ctx contains sessionID, messageID, abort signal, etc.
const todo = myTodoStore.add(text, priority);
return {
output: `Added todo: "${text}" with ${priority} priority`,
title: `Added: ${text}`, // Short title shown in UI
metadata: { todoId: todo.id }, // Optional metadata
};
},
});
Tool with Permission Request
Some actions should ask the user first:
const deleteAllTool = defineTool({
name: "deleteAllTodos",
description: "Delete all todos - use with caution!",
parameters: z.object({}),
execute: async (_, ctx) => {
// Ask for permission before destructive action
await ctx.ask({
permission: "delete_all",
metadata: { count: myTodoStore.count() },
});
// If we get here, user approved
myTodoStore.clear();
return { output: "All todos deleted", title: "Cleared todos" };
},
});
Tool with Progress Updates
const importTool = defineTool({
name: "importTodos",
description: "Import todos from a file",
parameters: z.object({ url: z.string() }),
execute: async ({ url }, ctx) => {
await ctx.metadata({ title: "Fetching file..." });
const data = await fetch(url).then(r => r.json());
await ctx.metadata({ title: `Importing ${data.length} items...` });
for (const item of data) {
myTodoStore.add(item.text);
}
return { output: `Imported ${data.length} todos`, title: "Import complete" };
},
});
DOM Tools (Built-in)
The package includes tools for reading the DOM:
import { readDOMTool, getPageInfoTool, queryElementsTool } from "@opencode/browser/tools";
// Include these in your tools array
const tools = [
addTodoTool,
readDOMTool, // Read the full page HTML
getPageInfoTool, // Get page title, URL, meta info
queryElementsTool // Query specific elements
];
Guide Tool (Show Users Where Things Are)
Create a tool that highlights UI elements:
const guideTool = defineTool({
name: "guide",
description: "Highlight a UI element to show the user where something is",
parameters: z.object({
selector: z.string().describe("CSS selector for the element"),
message: z.string().describe("Message to show the user"),
}),
execute: async ({ selector, message }) => {
const el = document.querySelector(selector);
if (!el) return { output: `Element not found: ${selector}`, title: "Guide" };
// Add highlight effect
const highlight = document.createElement("div");
highlight.style.cssText = `
position: fixed;
inset: 0;
background: rgba(0,0,0,0.5);
z-index: 9999;
pointer-events: none;
`;
document.body.appendChild(highlight);
const rect = el.getBoundingClientRect();
const spotlight = document.createElement("div");
spotlight.style.cssText = `
position: fixed;
left: ${rect.left - 8}px;
top: ${rect.top - 8}px;
width: ${rect.width + 16}px;
height: ${rect.height + 16}px;
border: 3px solid #3b82f6;
border-radius: 8px;
box-shadow: 0 0 0 9999px rgba(0,0,0,0.5);
z-index: 10000;
`;
document.body.appendChild(spotlight);
// Auto-remove after 3 seconds
setTimeout(() => {
highlight.remove();
spotlight.remove();
}, 3000);
return { output: message, title: "Showing: " + selector };
},
});
Defining Skills
A Skill tells the AI about your app and how to use it. Think of it as the AI's instruction manual for your specific application.
Basic Skill
import { defineAppSkill } from "@opencode/browser";
const todoAppSkill = defineAppSkill({
name: "todo-app",
description: "A todo list application",
// Instructions for the AI
instructions: `
You are helping the user manage their todo list.
## Available Actions
- Use 'addTodo' to create new todos
- Use 'listTodos' to see all todos
- Use 'toggleTodo' to mark items complete/incomplete
- Use 'deleteTodo' to remove items
## UI Guidance
- The todo input is at the top with id="todo-input"
- Each todo has a checkbox and delete button
- Use 'guide' to highlight elements when explaining the UI
## Behavior
- Be concise and helpful
- After adding todos, briefly confirm what was added
- If the user asks where something is, use the guide tool
`,
// Define API endpoints (optional - for more complex apps)
endpoints: [
{
method: "GET",
path: "/api/todos",
description: "List all todos",
},
{
method: "POST",
path: "/api/todos",
description: "Create a new todo",
body: { text: "string", priority: "low|medium|high" },
},
],
});
Using Skills with the Provider
import { OpenCodeProvider } from "@opencode/browser/react";
import { SkillRegistry, defaultAgentRegistry } from "@opencode/browser";
// Register the skill
SkillRegistry.register(todoAppSkill);
// The skill instructions get added to the system prompt
<OpenCodeProvider
skills={[todoAppSkill]}
// ... other props
>
  {/* ... */}
</OpenCodeProvider>
Skill with Authentication
For apps that need auth to call APIs:
const myAppSkill = defineAppSkill({
name: "my-app",
description: "My authenticated app",
instructions: "...",
// Auth configuration
auth: {
type: "bearer",
getToken: async () => {
return localStorage.getItem("auth_token");
},
},
endpoints: [
{
method: "GET",
path: "/api/user/profile",
description: "Get current user profile",
requiresAuth: true,
},
],
});
React Integration
Available Hooks
useChat() - Simplified chat interface
const {
messages, // MessageWithParts[] - all messages
status, // { type: "idle" | "busy" }
isStreaming, // boolean - is AI currently responding
send, // (text: string) => Promise<void>
abort, // () => void - cancel current request
clear, // () => Promise<void> - start new session
sessionID, // string | null
} = useChat();
useSession(sessionID) - Full session control
const {
session, // Session | null
messages, // MessageWithParts[]
status, // SessionStatus
loading, // boolean
isStreaming, // boolean
send, // (text: string, options?) => Promise<void>
abort, // () => void
refresh, // () => Promise<void>
} = useSession(sessionID);
useSessions() - Manage multiple sessions
const {
sessions, // Session[]
loading, // boolean
error, // Error | null
refresh, // () => Promise<void>
create, // (title?) => Promise<Session>
delete, // (id) => Promise<void>
} = useSessions();
useTodos(sessionID) - Track AI todos
When the AI uses the todoWrite tool, you can display them:
const { todos } = useTodos(sessionID);
// todos: Array<{ id, content, status, priority }>
usePermission() - Handle permission requests
const {
pending, // PermissionRequest[]
reply, // (requestID, "always" | "once" | "reject") => void
} = usePermission();
// Show dialog when pending.length > 0
useQuestion() - Handle AI questions
When the AI needs to ask the user something:
const {
pending, // QuestionRequest[]
reply, // (requestID, answers: string[][]) => void
reject, // (requestID) => void
} = useQuestion();
Rendering Messages
function MessageList() {
const { messages } = useChat();
return (
<div>
{messages.map(msg => (
<div key={msg.info.id} className={msg.info.role}>
{msg.parts.map(part => {
switch (part.type) {
case "text":
return <p key={part.id}>{part.text}</p>;
case "tool":
return (
<div key={part.id} className="tool-call">
<strong>{part.tool}</strong>
{part.state.status === "running" && <span>Running...</span>}
{part.state.status === "completed" && (
<pre>{part.state.output}</pre>
)}
{part.state.status === "error" && (
<span className="error">{part.state.error}</span>
)}
</div>
);
case "reasoning":
return (
<details key={part.id}>
<summary>Thinking...</summary>
<p>{part.text}</p>
</details>
);
default:
return null;
}
})}
</div>
))}
</div>
);
}
Complete Example
Here's a complete todo app with AI integration:
// App.tsx
import { useState } from "react";
import { OpenCodeProvider, useChat, usePermission, useQuestion } from "@opencode/browser/react";
import { defineTool, createLLMProvider, InMemoryStorageAdapter, defineAppSkill } from "@opencode/browser";
import { readDOMTool } from "@opencode/browser/tools";
import { z } from "zod";
// ============================================================================
// Todo Store
// ============================================================================
type Todo = { id: string; text: string; done: boolean };
const todos: Todo[] = [];
function addTodo(text: string): Todo {
const todo = { id: crypto.randomUUID(), text, done: false };
todos.push(todo);
return todo;
}
function toggleTodo(id: string): boolean {
const todo = todos.find(t => t.id === id);
if (todo) todo.done = !todo.done;
return todo?.done ?? false;
}
function deleteTodo(id: string): boolean {
const idx = todos.findIndex(t => t.id === id);
if (idx >= 0) { todos.splice(idx, 1); return true; }
return false;
}
// ============================================================================
// Tools
// ============================================================================
const addTodoTool = defineTool({
name: "addTodo",
description: "Add a new todo item",
parameters: z.object({
text: z.string().describe("The todo text"),
}),
execute: async ({ text }) => {
const todo = addTodo(text);
return { output: `Added: "${text}"`, title: `Added todo` };
},
});
const listTodosTool = defineTool({
name: "listTodos",
description: "List all todos",
parameters: z.object({}),
execute: async () => {
if (todos.length === 0) return { output: "No todos yet!", title: "Listed" };
const list = todos.map((t, i) => `${i + 1}. [${t.done ? "x" : " "}] ${t.text}`).join("\n");
return { output: list, title: `${todos.length} todos` };
},
});
const toggleTodoTool = defineTool({
name: "toggleTodo",
description: "Toggle a todo's completion status",
parameters: z.object({
index: z.number().describe("The 1-based index of the todo"),
}),
execute: async ({ index }) => {
const todo = todos[index - 1];
if (!todo) return { output: `Todo #${index} not found`, title: "Error" };
toggleTodo(todo.id);
return { output: `Toggled: "${todo.text}" → ${todo.done ? "done" : "not done"}`, title: "Toggled" };
},
});
const guideTool = defineTool({
name: "guide",
description: "Highlight a UI element to show the user where it is",
parameters: z.object({
selector: z.string().describe("CSS selector"),
message: z.string().describe("Explanation message"),
}),
execute: async ({ selector, message }) => {
const el = document.querySelector(selector);
if (!el) return { output: `Not found: ${selector}`, title: "Guide" };
el.scrollIntoView({ behavior: "smooth", block: "center" });
(el as HTMLElement).style.outline = "3px solid #3b82f6";
(el as HTMLElement).style.outlineOffset = "2px";
setTimeout(() => {
(el as HTMLElement).style.outline = "";
(el as HTMLElement).style.outlineOffset = "";
}, 3000);
return { output: message, title: "Highlighted" };
},
});
// ============================================================================
// Skill
// ============================================================================
const todoSkill = defineAppSkill({
name: "todo-app",
description: "Simple todo list",
instructions: `
You help users manage their todo list.
TOOLS:
- addTodo: Create a new todo
- listTodos: Show all todos
- toggleTodo: Mark done/undone (use 1-based index)
- guide: Highlight UI elements
- readDOM: Read the page HTML
BEHAVIOR:
- Be concise
- After actions, confirm what happened
- Use 'guide' when asked "where is X" or "how do I X"
- The input field has id="todo-input"
- The add button has id="add-btn"
`,
});
// ============================================================================
// App
// ============================================================================
const tools = [addTodoTool, listTodosTool, toggleTodoTool, guideTool, readDOMTool];
export default function App() {
return (
<OpenCodeProvider
llm={createLLMProvider({ endpoint: "http://localhost:3001/api/llm" })}
storage={new InMemoryStorageAdapter()}
tools={tools}
skills={[todoSkill]}
systemPrompt="You are a helpful todo assistant."
defaultModel={{ providerID: "openrouter", modelID: "anthropic/claude-sonnet-4" }}
>
<div style={{ display: "flex", height: "100vh" }}>
<TodoList />
<ChatPanel />
</div>
</OpenCodeProvider>
);
}
function TodoList() {
const [input, setInput] = useState("");
const [, forceUpdate] = useState(0);
const handleAdd = () => {
if (input.trim()) {
addTodo(input);
setInput("");
forceUpdate(n => n + 1);
}
};
return (
<div style={{ flex: 1, padding: 20 }}>
<h1>My Todos</h1>
<div>
<input
id="todo-input"
value={input}
onChange={e => setInput(e.target.value)}
onKeyDown={e => e.key === "Enter" && handleAdd()}
placeholder="What needs doing?"
/>
<button id="add-btn" onClick={handleAdd}>Add</button>
</div>
<ul>
{todos.map(todo => (
<li key={todo.id} style={{ textDecoration: todo.done ? "line-through" : "none" }}>
<input type="checkbox" checked={todo.done} onChange={() => { toggleTodo(todo.id); forceUpdate(n => n + 1); }} />
{todo.text}
<button onClick={() => { deleteTodo(todo.id); forceUpdate(n => n + 1); }}>×</button>
</li>
))}
</ul>
</div>
);
}
function ChatPanel() {
const [input, setInput] = useState("");
const { messages, send, isStreaming } = useChat();
const { pending: permissions, reply: replyPerm } = usePermission();
const { pending: questions, reply: replyQ, reject: rejectQ } = useQuestion();
return (
<div style={{ width: 400, borderLeft: "1px solid #ccc", display: "flex", flexDirection: "column" }}>
<div style={{ flex: 1, overflow: "auto", padding: 10 }}>
{messages.map(msg => (
<div key={msg.info.id} style={{ marginBottom: 10, textAlign: msg.info.role === "user" ? "right" : "left" }}>
{msg.parts.filter(p => p.type === "text").map(p => (
<div key={p.id} style={{
display: "inline-block",
padding: "8px 12px",
borderRadius: 8,
background: msg.info.role === "user" ? "#3b82f6" : "#e5e7eb",
color: msg.info.role === "user" ? "white" : "black",
}}>
{p.text}
</div>
))}
{msg.parts.filter(p => p.type === "tool").map(p => (
<div key={p.id} style={{ fontSize: 12, color: "#666", fontStyle: "italic" }}>
🔧 {p.tool}: {p.state.status === "completed" ? p.state.output : p.state.status}
</div>
))}
</div>
))}
</div>
{permissions.length > 0 && (
<div style={{ padding: 10, background: "#fef3c7" }}>
<p>Permission: {permissions[0].permission}</p>
<button onClick={() => replyPerm(permissions[0].id, "once")}>Allow</button>
<button onClick={() => replyPerm(permissions[0].id, "reject")}>Deny</button>
</div>
)}
{questions.length > 0 && (
<div style={{ padding: 10, background: "#dbeafe" }}>
<p>{questions[0].questions[0]?.question}</p>
{questions[0].questions[0]?.options.map(opt => (
<button key={opt.label} onClick={() => replyQ(questions[0].id, [[opt.label]])}>{opt.label}</button>
))}
<button onClick={() => rejectQ(questions[0].id)}>Cancel</button>
</div>
)}
<form onSubmit={e => { e.preventDefault(); send(input); setInput(""); }} style={{ display: "flex", padding: 10 }}>
<input value={input} onChange={e => setInput(e.target.value)} style={{ flex: 1 }} placeholder="Ask me anything..." />
<button type="submit" disabled={isStreaming}>{isStreaming ? "..." : "Send"}</button>
</form>
</div>
);
}
API Reference
Core Exports (@opencode/browser)
| Export | Description |
|--------|-------------|
| defineTool(config) | Create a tool definition |
| defineAppSkill(config) | Create a skill definition |
| createLLMProvider(config) | Create an LLM provider |
| InMemoryStorageAdapter | In-memory message storage |
| createFetchStorageAdapter(config) | Remote storage adapter |
| Bus | Event pub/sub system |
| Events | All event types |
| Identifier | ID generation utilities |
React Exports (@opencode/browser/react)
| Export | Description |
|--------|-------------|
| OpenCodeProvider | Context provider component |
| useOpenCode() | Access raw context |
| useChat() | Simplified chat interface |
| useSession(id) | Full session control |
| useSessions() | Manage multiple sessions |
| useTodos(sessionID) | Track AI task list |
| usePermission() | Handle permission dialogs |
| useQuestion() | Handle AI questions |
| useEvents(types?) | Subscribe to events |
Tool Exports (@opencode/browser/tools)
| Export | Description |
|--------|-------------|
| readDOMTool | Read page HTML |
| getPageInfoTool | Get page metadata |
| queryElementsTool | Query DOM elements |
| todoWriteTool | Write AI task list |
| todoReadTool | Read AI task list |
| questionTool | Ask user questions |
| createTaskTool(registry) | Create sub-agent tasks |
Troubleshooting
"Session already running"
The previous request didn't complete. Call abort() before sending a new message, or wait for isStreaming to be false.
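A defensive wrapper makes this automatic. The sketch below is written against the useChat() surface described above, but the guard logic is ours, not part of the library:

```typescript
// Abort any in-flight request before sending, so a fast second message
// can't trip "Session already running". ChatLike mirrors the subset of
// useChat()'s return value that we need.
type ChatLike = {
  isStreaming: boolean;
  abort: () => void;
  send: (text: string) => Promise<void>;
};

async function safeSend(chat: ChatLike, text: string): Promise<void> {
  if (chat.isStreaming) chat.abort(); // cancel the previous turn first
  await chat.send(text);
}
```

In a component you would call `safeSend(useChat(), input)` in place of a bare `send(input)`.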
Tools not being called
Make sure your tool descriptions are clear. The AI decides which tools to use based on the description. Also check that the tool is included in the tools array.
"User not found" / 401 errors
Your LLM API key is invalid or expired. Check your backend proxy configuration and verify your API key at your provider's dashboard.
Messages not updating in UI
Make sure you're using the hooks correctly. The hooks subscribe to events internally - if you're managing state manually, use Bus.subscribe().
Second message doesn't work / hangs
Check the browser console for errors. Common causes:
- API key issues (401 errors)
- The error isn't being displayed in the UI - add error handling to your chat component
License
MIT
