
jsx-ai · v0.1.5 · 1,014 downloads

jsx-ai


JSX interface for structured LLM calls. Tools, messages, and prompts become composable components.

import { callLLM } from "jsx-ai"

const result = await callLLM(
  <>
    <system>You are a coding agent</system>
    <tool name="exec" description="Run a shell command">
      <param name="command" type="string" required>The command to run</param>
    </tool>
    <message role="user">List all TypeScript files</message>
  </>,
  { model: "gemini-2.5-flash" }
)

result.toolCalls  // [{ name: "exec", args: { command: "find . -name '*.ts'" } }]
result.text       // ""
result.usage      // { inputTokens: 42, outputTokens: 15 }

Why JSX?

Before — tools as JSON schemas, stringly-typed, not reusable:

const response = await fetch(url, {
  body: JSON.stringify({
    model: "gemini-2.5-flash",
    systemInstruction: { parts: [{ text: "You are a coding agent" }] },
    tools: [{ functionDeclarations: [{
      name: "exec",
      description: "Run a shell command",
      parameters: { type: "object", properties: {
        command: { type: "string", description: "The command to run" }
      }, required: ["command"] }
    }] }],
    contents: [{ role: "user", parts: [{ text: "List all TypeScript files" }] }],
  })
})
const data = await response.json()
const toolCall = data.candidates[0].content.parts[0].functionCall

After — same call, composable and provider-agnostic:

const ExecTool = () => (
  <tool name="exec" description="Run a shell command">
    <param name="command" type="string" required>The command to run</param>
  </tool>
)

const result = await callLLM(
  <>
    <system>You are a coding agent</system>
    <ExecTool />
    <message role="user">List all TypeScript files</message>
  </>,
  { model: "gemini-2.5-flash" }  // or "gpt-4o" or "claude-3-sonnet-20240229"
)

result.toolCalls  // [{ name: "exec", args: { command: "find . -name '*.ts'" } }]

Installation

bun add jsx-ai
# or: npm install jsx-ai

Add to tsconfig.json:

{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "jsx-ai"
  }
}

✨ What You Get

  • Multi-provider → Gemini, OpenAI, Anthropic, DeepSeek — auto-detected from model name
  • 5 strategies → native FC, NLT, XML, natural, hybrid — same prompt, different encodings
  • Composable → tools and prompts are reusable JSX components
  • Skills → two-phase skill loading from .md files (discovery → resolution)
  • Type-safe → full TypeScript types, custom JSX runtime (not React)
  • Benchmarked → multi-turn agentic scenarios scored per strategy

🔌 Providers

Auto-detected from model name. Override with { provider: "openai" }.

| Model | Provider | Auth | Env var |
|-------|----------|------|---------|
| gemini-* | Gemini | x-goog-api-key | GEMINI_API_KEY |
| gpt-*, o4-* | OpenAI | Bearer | OPENAI_API_KEY |
| claude-* | Anthropic | x-api-key + version | ANTHROPIC_API_KEY |
| deepseek-* | OpenAI (compat) | Bearer | DEEPSEEK_API_KEY |

// Gemini (default)
await callLLM(<>...</>, { model: "gemini-2.5-flash" })

// OpenAI
await callLLM(<>...</>, { model: "gpt-4o" })

// Anthropic
await callLLM(<>...</>, { model: "claude-3-sonnet-20240229" })
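The auto-detection above can be sketched as a simple prefix match on the model name. Note that detectProvider is a hypothetical name for illustration, not a jsx-ai export:

```typescript
// Illustrative sketch of prefix-based provider detection, following the
// model/provider table above. Not the library's actual internal code.
type ProviderName = "gemini" | "openai" | "anthropic" | "deepseek"

function detectProvider(model: string): ProviderName {
  if (model.startsWith("gemini-")) return "gemini"
  if (model.startsWith("gpt-") || model.startsWith("o4-")) return "openai"
  if (model.startsWith("claude-")) return "anthropic"
  if (model.startsWith("deepseek-")) return "deepseek"
  return "gemini" // fall back to the default provider
}

console.log(detectProvider("claude-3-sonnet-20240229")) // "anthropic"
```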

Provider nuances handled automatically:

  • Gemini: merges consecutive same-role messages (API rejects them otherwise)
  • OpenAI o4-*: uses max_completion_tokens + forced temperature=1.0
  • Anthropic: system prompt as top-level field, tool_use blocks, input_schema
  • DeepSeek: routes to api.deepseek.com with OpenAI-compatible format
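The Gemini same-role merge mentioned above can be sketched as a single pass that concatenates adjacent messages with the same role. This is an illustrative helper (mergeMessages is not part of the jsx-ai API):

```typescript
// Sketch of merging consecutive same-role messages, as the Gemini API
// rejects two adjacent turns with the same role. Illustrative only.
interface Msg {
  role: "user" | "assistant"
  content: string
}

function mergeMessages(messages: Msg[]): Msg[] {
  const out: Msg[] = []
  for (const m of messages) {
    const last = out[out.length - 1]
    if (last && last.role === m.role) {
      last.content += "\n" + m.content // fold into the previous turn
    } else {
      out.push({ ...m })
    }
  }
  return out
}
```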

Custom providers

import { registerProvider } from "jsx-ai"
import type { Provider } from "jsx-ai"

class MyProvider implements Provider {
  name = "custom"
  buildRequest(prepared, model, apiKey) { /* ... */ }
  parseResponse(data) { /* ... */ }
}

registerProvider("custom", new MyProvider())
await callLLM(<>...</>, { provider: "custom", model: "my-model" })

🎯 Strategies

Same JSX prompt, different tool encodings. Each strategy controls how tools appear to the model and how responses are parsed.

| Strategy | Tools sent as | Response parsed from | Best for |
|----------|---------------|----------------------|----------|
| native | API tools field | Structured FC | Single tool calls, lowest tokens |
| nlt | Text descriptions + native FC | Structured FC | Multi-turn agentic loops |
| xml | Text with XML schema | XML in text | Multi-tool batching |
| natural | Text descriptions | Action blocks in text | Complex reasoning + tools |
| hybrid | API tools + text schema | Either | Balanced |

// Strategy via options
await callLLM(<>...</>, { strategy: "nlt" })

// Or register a custom one
import { registerStrategy } from "jsx-ai"
registerStrategy("my-strategy", { prepare, parseResponse })

Benchmark results (gemini-2.5-flash, kv-store scenario)

3-turn agentic loop: Plan → Execute → Adapt

| Strategy | Turn 1 (Plan) | Turn 2 (Execute) | Turn 3 (Adapt) | Total |
|----------|:---:|:---:|:---:|:---:|
| nlt | 100% | 73% | 84% | 86% |
| natural | 100% | 67% | 69% | 79% |
| native | 46% | 5% | 33% | 28% |

Native FC underperforms in agentic loops because it batches homogeneous tool calls — calling 5× use_skill but skipping set_objectives in the same turn.
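The Total column appears to be the rounded mean of the three turn scores (an assumption on my part, but it reproduces the published numbers exactly):

```typescript
// Recomputing the benchmark Total column, assuming Total is the rounded
// mean of the three per-turn scores from the table above.
const turns: Record<string, number[]> = {
  nlt: [100, 73, 84],
  natural: [100, 67, 69],
  native: [46, 5, 33],
}

const total = (scores: number[]) =>
  Math.round(scores.reduce((a, b) => a + b, 0) / scores.length)

console.log(total(turns.nlt)) // 86
```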

📦 JSX Elements

| Element | Props | Description |
|---------|-------|-------------|
| `<system>` | — | System instruction (text children) |
| `<tool>` | name, description | Tool/function declaration |
| `<param>` | name, type, required, enum | Tool parameter (children = description) |
| `<message>` | role (user \| assistant) | Conversation message |
| `<prompt>` | model, temperature, maxTokens, strategy | Optional config wrapper |
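To make the mapping concrete, here is a sketch of how a list of param declarations could translate into the raw parameters schema shown in the Why JSX section. The toSchema helper is illustrative, not a jsx-ai export:

```typescript
// Sketch: <param> declarations mapped to the JSON-schema "parameters"
// object that the raw Gemini call in the Why JSX section uses.
interface Param {
  name: string
  type: string
  required?: boolean
  description: string
}

function toSchema(params: Param[]) {
  return {
    type: "object",
    properties: Object.fromEntries(
      params.map((p) => [p.name, { type: p.type, description: p.description }])
    ),
    required: params.filter((p) => p.required).map((p) => p.name),
  }
}
```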

🧠 Skills

Two-phase skill loading from .md files with YAML frontmatter:

---
name: bun-expert
description: Bun runtime expertise — Bun.serve(), bun:sqlite, bun:test
---
## Bun Runtime
- HTTP: Bun.serve() with export default { port, fetch } pattern
- Database: import { Database } from "bun:sqlite"
- Testing: import { describe, it, expect } from "bun:test"
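A skill file like the one above splits into YAML frontmatter and a markdown body. A minimal sketch of that split (jsx-ai's actual loader may handle more than the flat key: value fields shown here):

```typescript
// Minimal sketch of splitting frontmatter from a skill .md file.
// parseSkill is illustrative, not part of the jsx-ai public API.
function parseSkill(source: string) {
  const match = source.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/)
  if (!match) return { meta: {} as Record<string, string>, body: source }
  const meta: Record<string, string> = {}
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":")
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim()
  }
  return { meta, body: match[2] }
}
```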

Phase 1 — Discovery: skills appear as a lightweight catalog

import { Skill, UseSkillTool } from "jsx-ai"

await callLLM(
  <>
    <Skill path="skills/bun-expert.md" />
    <Skill path="skills/security.md" />
    <UseSkillTool />
    <message role="user">Build a KV store API</message>
  </>
)
// Model sees: "Available skill: bun-expert — Bun runtime expertise"
// Model calls: use_skill({ skill_name: "bun-expert" })

Phase 2 — Resolution: requested skills expand to full content

import { Skill, resolveSkills } from "jsx-ai"

const resolved = resolveSkills(skillPaths, ["bun-expert"])

await callLLM(
  <>
    <Skill path="skills/bun-expert.md" resolve />
    <Skill path="skills/security.md" />
    <message role="user">Now implement it</message>
  </>
)
// Model sees full bun-expert methodology + just the catalog entry for security

🔍 render(tree)

Inspect the extracted prompt without calling the LLM:

import { render } from "jsx-ai"

const extracted = render(
  <>
    <system>You are helpful</system>
    <tool name="exec" description="Run command">
      <param name="command" type="string" required>Command</param>
    </tool>
    <message role="user">List files</message>
  </>
)

extracted.tools     // [{ name: "exec", parameters: { ... } }]
extracted.messages  // [{ role: "user", content: "List files" }]
extracted.system    // "You are helpful"

⚙️ CallOptions

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| model | string | "gemini-2.5-flash" | Model name (also determines provider) |
| provider | "gemini" \| "openai" \| "anthropic" | auto-detected | Force a specific provider |
| strategy | "native" \| "nlt" \| "xml" \| "natural" \| "hybrid" | "auto" | Tool encoding strategy |
| apiKey | string | from env | Override API key |
| temperature | number | 0.1 | Sampling temperature |
| maxTokens | number | 4000 | Max output tokens |
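The defaults above can be sketched as a simple fallback merge. This resolveOptions helper is hypothetical and only mirrors the defaults listed in the table:

```typescript
// Sketch of resolving CallOptions against the documented defaults.
// Illustrative only; not the library's actual internal code.
interface CallOptions {
  model?: string
  temperature?: number
  maxTokens?: number
}

function resolveOptions(opts: CallOptions = {}) {
  return {
    model: opts.model ?? "gemini-2.5-flash",
    temperature: opts.temperature ?? 0.1,
    maxTokens: opts.maxTokens ?? 4000,
  }
}
```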

💬 callText(model, messages, options?)

Simple text-in/text-out LLM call — no JSX needed. Uses the same provider routing and auth:

import { callText } from "jsx-ai"

const text = await callText("gemini-2.5-flash", [
  { role: "system", content: "You are a planner. Break tasks into steps." },
  { role: "user", content: "Build a REST API with authentication" },
])

console.log(text)  // "1. Set up project with Bun.serve()..."

🔄 streamLLM(model, messages, options?)

Stream LLM responses token-by-token via SSE. Same provider routing as callText:

import { streamLLM } from "jsx-ai"

for await (const chunk of streamLLM("gemini-2.5-flash", [
  { role: "system", content: "You are a storyteller" },
  { role: "user", content: "Tell me a short story" },
])) {
  process.stdout.write(chunk)
}

Options for both callText and streamLLM:

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| temperature | number | 0.3 | Sampling temperature |
| maxTokens | number | 8000 | Max output tokens |
| apiKey | string | from env | Override API key |

License

MIT