# @agentspec/codegen
Provider-agnostic code generation for AgentSpec. Reads an agent.yaml manifest and generates complete, runnable agent code for any supported framework.
## Install

```shell
npm install @agentspec/codegen
```

## Quick Start

```typescript
import { generateCode, resolveProvider } from '@agentspec/codegen'
import { loadManifest } from '@agentspec/sdk'

const { manifest } = loadManifest('./agent.yaml')
const provider = resolveProvider() // auto-detects Claude CLI > OpenAI-compatible > Anthropic API

const result = await generateCode(manifest, {
  framework: 'langgraph',
  provider,
})

console.log(Object.keys(result.files)) // ['agent.py', 'tools.py', ...]
```

## Providers
Three built-in providers, auto-detected in priority order:
| Provider | Class | Requires |
|----------|-------|----------|
| Claude subscription | `ClaudeSubscriptionProvider` | `claude` CLI authenticated |
| OpenAI-compatible | `OpenAICompatibleProvider` | `AGENTSPEC_LLM_API_KEY` + `AGENTSPEC_LLM_MODEL` |
| Anthropic API | `AnthropicApiProvider` | `ANTHROPIC_API_KEY` env var |
The OpenAI-compatible provider works with any endpoint that speaks the OpenAI wire format: OpenRouter, Groq, Together, Ollama, Nvidia NIM, OpenAI.com, and others. Set `AGENTSPEC_LLM_BASE_URL` to point at a non-OpenAI endpoint.
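For example, pointing the OpenAI-compatible provider at OpenRouter might look like this (the key and model values are placeholders; substitute your own):

```shell
# Placeholders: substitute your own key and model.
export AGENTSPEC_LLM_API_KEY="sk-or-v1-..."
export AGENTSPEC_LLM_MODEL="qwen/qwen3-235b-a22b"
# Omit this for OpenAI.com; set it for any other endpoint:
export AGENTSPEC_LLM_BASE_URL="https://openrouter.ai/api/v1"
```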
### Auto-detection

```typescript
import { resolveProvider } from '@agentspec/codegen'

const provider = resolveProvider() // auto-detect
const forced = resolveProvider('openai-compatible') // force a specific provider
```

Override via the `AGENTSPEC_CODEGEN_PROVIDER` env var, e.g. `AGENTSPEC_CODEGEN_PROVIDER=openai-compatible`. Valid values: `auto`, `claude-sub`, `claude-subscription`, `openai-compatible`, `anthropic-api`.
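The detection cascade can be pictured as a simple fall-through. This is an illustrative sketch, not the package's actual implementation; `detectProvider` and its arguments are hypothetical:

```typescript
// Hypothetical sketch of the auto-detection cascade described above.
type ProviderName = 'claude-subscription' | 'openai-compatible' | 'anthropic-api'

function detectProvider(
  env: Record<string, string | undefined>,
  claudeCliAuthenticated: boolean, // whether the `claude` CLI is logged in
): ProviderName | null {
  if (claudeCliAuthenticated) return 'claude-subscription'
  if (env.AGENTSPEC_LLM_API_KEY && env.AGENTSPEC_LLM_MODEL) return 'openai-compatible'
  if (env.ANTHROPIC_API_KEY) return 'anthropic-api'
  return null // no provider available
}
```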
### Direct instantiation

```typescript
import { AnthropicApiProvider, OpenAICompatibleProvider } from '@agentspec/codegen'

// Anthropic
const anthropic = new AnthropicApiProvider('sk-ant-...', 'https://proxy.example.com')

// OpenAI-compatible (e.g. OpenRouter)
const openrouter = new OpenAICompatibleProvider(
  'sk-or-v1-...',
  'qwen/qwen3-235b-a22b',
  'https://openrouter.ai/api/v1',
)
```

## Frameworks
List available frameworks at runtime:
```typescript
import { listFrameworks } from '@agentspec/codegen'

console.log(listFrameworks()) // ['langgraph', 'crewai', 'mastra', ...]
```

Add a new framework by creating a skill file in `src/skills/<name>.md`; no TypeScript code needed.
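Since the framework list is discovered at runtime, a caller may want to validate user input against it before generating. A small sketch; `assertFramework` is not part of the package:

```typescript
// Hypothetical helper: fail fast on an unknown framework name.
function assertFramework(name: string, available: string[]): string {
  if (!available.includes(name)) {
    throw new Error(`Unknown framework "${name}". Available: ${available.join(', ')}`)
  }
  return name
}

// Usage (assuming listFrameworks() from above):
// const framework = assertFramework(userInput, listFrameworks())
```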
## Streaming

Stream generation progress via `onChunk`:

```typescript
const result = await generateCode(manifest, {
  framework: 'langgraph',
  provider,
  onChunk: (chunk) => {
    if (chunk.type === 'delta') {
      process.stdout.write(chunk.text)
    }
  },
})
```

Chunk types:

- `delta`: text fragment with `text`, `accumulated`, and `elapsedSec`
- `heartbeat`: keep-alive with `elapsedSec`
- `done`: final result with a `result` string and `elapsedSec`
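The chunk variants above can be modeled as a discriminated union on `type`. The type below is inferred from the fields listed here and is not exported by the package:

```typescript
// Inferred shape of streaming chunks (an assumption based on the documented fields).
type Chunk =
  | { type: 'delta'; text: string; accumulated: string; elapsedSec: number }
  | { type: 'heartbeat'; elapsedSec: number }
  | { type: 'done'; result: string; elapsedSec: number }

// Narrowing on `type` gives access to the variant-specific fields.
function describe(chunk: Chunk): string {
  switch (chunk.type) {
    case 'delta':
      return `+${chunk.text.length} chars (${chunk.accumulated.length} total)`
    case 'heartbeat':
      return `alive at ${chunk.elapsedSec}s`
    case 'done':
      return `done in ${chunk.elapsedSec}s`
  }
}
```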
## Utilities

### collect(stream)

Drain a provider stream to a single string:

```typescript
import { collect, resolveProvider } from '@agentspec/codegen'

const provider = resolveProvider()
const text = await collect(provider.stream(systemPrompt, userPrompt, {}))
```

### repairYaml(provider, yaml, errors)
Ask the LLM to fix schema validation errors in an `agent.yaml`:
```typescript
import { repairYaml, resolveProvider } from '@agentspec/codegen'

const fixed = await repairYaml(resolveProvider(), badYaml, validationErrors)
```

### probeProviders()
Diagnostic probe for all codegen providers (used by `agentspec provider-status`):
```typescript
import { probeProviders } from '@agentspec/codegen'

const report = await probeProviders()
console.log(report.results) // ProviderProbeResult[]: one per probe
console.log(report.env.resolvedProvider) // 'claude-subscription' | 'openai-compatible' | 'anthropic-api' | null
```

## Error Handling
All errors are typed as `CodegenError` with a `code` property:
```typescript
import { CodegenError } from '@agentspec/codegen'

try {
  await generateCode(manifest, { framework: 'langgraph', provider })
} catch (err) {
  if (err instanceof CodegenError) {
    console.error(err.code, err.message)
    // err.code: 'auth_failed' | 'generation_failed' | 'parse_failed' | ...
  }
}
```

Error codes: `auth_failed`, `quota_exceeded`, `rate_limited`, `model_not_found`, `generation_failed`, `parse_failed`, `provider_unavailable`, `response_invalid`.
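Some of these codes suggest a transient condition worth retrying, while others are permanent. Which codes are actually transient is an assumption here; the classification below is illustrative only:

```typescript
// Assumed-transient codes (an assumption, not documented by the package);
// permanent failures like auth_failed should surface immediately.
const RETRYABLE = new Set(['rate_limited', 'provider_unavailable'])

function shouldRetry(code: string, attempt: number, maxAttempts = 3): boolean {
  return attempt < maxAttempts && RETRYABLE.has(code)
}
```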
