# typebulb
Run single-file apps from markdown. A `.bulb.md` file bundles code, styles, data, and config in one file. `typebulb` compiles and serves it locally with hot reload.

Create bulbs on typebulb.com and export them, or generate `.bulb.md` files with any AI coding tool. See the FAQ for the bulb format and the `tb.*` API reference.
## Quick Start
A bulb is a markdown file with named code blocks:
````md
---
format: typebulb/v1
name: My App
---

**code.tsx**

```tsx
document.getElementById("root")!.textContent = "Hello from a bulb!";
```

**index.html**

```html
<div id="root"></div>
```
````

Run it:

```sh
npx typebulb my-app.bulb.md
```

Or install globally:

```sh
npm install -g typebulb
```

## Usage
```
typebulb <file.bulb.md>       Run a bulb
typebulb .                    Find .bulb.md in current directory
typebulb --no-watch <file>    Disable hot reload
typebulb --port 3333 <file>   Custom port
typebulb --no-open <file>     Don't auto-open browser
typebulb --server <file>      Run server.ts only, no web server
typebulb --help               Show help
typebulb --version            Show version
```

## Features
- Server-side code — Add a **server.ts** section; exported functions become callable from the browser via `tb.server.<name>()` (e.g., `export async function query(...)` → `await tb.server.query(...)`) (see the sketch after this list)
- CLI logging — `tb.server.log(...)` prints to the CLI's stdout
- Env files — `.env` and `.env.local` auto-loaded from cwd
- Server mode — `--server` runs only the **server.ts** section in Node, skipping the web server. Bulbs with only **server.ts** (no **code.tsx**) use this mode automatically.
- Filesystem access — `tb.fs.read()` and `tb.fs.write()` for local files
- Hot reload — Recompiles on save and refreshes the browser (on by default; disable with `--no-watch`)
- Package resolution — Client dependencies are automatically resolved by generating import maps (same resolver as typebulb.com). Server dependencies are automatically installed via npm.
- AI calls — `tb.ai()` for general-purpose AI (chatbots, agents, experiments). `tb.models()` lists available models. Set API keys in `.env` (see below).
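For example, a two-section bulb using the server bridge might look like this sketch. It assumes `tb` is available as a global in bulb code (as in the Quick Start example), that `tb.fs.read(path)` returns the file contents as a string, and that `tb.server.log(...)` is callable from the client; the `notes.txt` filename and `readNotes` function are made up for illustration.

````md
**server.ts**

```ts
// Runs in Node on the CLI side; exported functions become
// callable from the browser as tb.server.<name>().
export async function readNotes(): Promise<string> {
  // tb.fs gives local filesystem access; the exact signature is assumed here.
  return await tb.fs.read("notes.txt");
}
```

**code.tsx**

```tsx
// Runs in the browser and calls the exported server function.
tb.server.log("fetching notes"); // prints to the CLI's stdout
const notes = await tb.server.readNotes();
document.getElementById("root")!.textContent = notes;
```
````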
## AI Setup
Bulbs can call AI providers via `tb.ai()`. Add API keys to your `.env` file:
| Provider name | API key env var |
|---------------|-----------------|
| anthropic | ANTHROPIC_API_KEY |
| openai | OPENAI_API_KEY |
| gemini | GOOGLE_API_KEY |
| openrouter | OPENROUTER_API_KEY |
Set your default provider and model:
```
TB_AI_PROVIDER=anthropic
TB_AI_MODEL=claude-haiku-4-5-20251001
```

Both can be overridden per-call: `tb.ai({ provider: "gemini", model: "gemini-2.5-flash", ... })`.
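A minimal sketch of both call styles, again assuming the `tb` global; the `{ text }` result shape and `messages` format follow the Reasoning example below, and nothing is assumed about the return shape of `tb.models()` beyond being loggable.

```ts
// Uses the default provider/model from .env (TB_AI_PROVIDER / TB_AI_MODEL).
const { text } = await tb.ai({
  messages: [{ role: "user", content: "Summarize this file in one sentence." }],
});

// Per-call override of provider and model.
const { text: flash } = await tb.ai({
  provider: "gemini",
  model: "gemini-2.5-flash",
  messages: [{ role: "user", content: "Same question, different model." }],
});

// tb.models() lists the models available with the configured keys.
console.log(await tb.models());
```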
### Reasoning
`tb.ai()` accepts an optional `reasoning` parameter (0–3) that hints at how much extended thinking the model should use:
| Level | Label | Effect |
|-------|-------|--------|
| 0 | Min | No extended reasoning (default) |
| 1 | Low | Light reasoning |
| 2 | Med | Moderate reasoning |
| 3 | Max | Maximum reasoning |
```ts
const { text } = await tb.ai({
  messages: [{ role: "user", content: "Explain quantum tunneling" }],
  reasoning: 2,
});
```

Provider support varies — the level is mapped to provider-specific parameters (e.g. Anthropic's adaptive thinking, OpenAI's reasoning effort).
## Limitations
- Inference — `tb.infer()` is not supported locally. Bulbs that use inference will render but cannot run inference calls. Use `tb.ai()` for programmatic AI access instead.
## License
MIT
