# @promptlycms/prompts

v0.2.0

TypeScript SDK for Promptly CMS — fetch prompts, build Zod schemas, generate typed code.
TypeScript SDK for the Promptly CMS API. Stop hardcoding prompts in your codebase — manage them in a purpose-built CMS with versioning and instant publishing, then fetch them at runtime with full type safety.
- **Zero hardcoded prompts** — fetch prompts at runtime; update wording, models, and settings from the CMS without code changes or redeploys
- **Runtime client** — `getPrompt()` and `getPrompts()` with full TypeScript support
- **Codegen CLI** — generates typed template variables via declaration merging
- **AI SDK integration** — destructure directly into Vercel AI SDK `generateText`/`streamText`
- **Any AI provider** — works with any provider the Vercel AI SDK supports
- **Structured output** — Zod schemas built from CMS-defined output schemas
## Install

```sh
npm install @promptlycms/prompts
```

Peer dependencies:

```sh
npm install zod ai typescript
```

You'll also need at least one AI provider SDK for model resolution:

```sh
# Install the provider(s) your prompts use
npm install @ai-sdk/anthropic   # Claude models
npm install @ai-sdk/openai      # GPT / o-series models
npm install @ai-sdk/google      # Gemini models
npm install @ai-sdk/mistral     # Mistral / Mixtral models
```

## Quick start
### 1. Set your API key

```sh
# .env
PROMPTLY_API_KEY=pk_live_...
```

### 2. Generate types (optional but recommended)

```sh
npx promptly generate
```

This fetches all your prompts from the API and generates a `promptly-env.d.ts` file in your project root with typed autocomplete for every prompt ID and its template variables.

```sh
# Custom output path
npx promptly generate --output ./types/promptly-env.d.ts

# Pass the API key directly
npx promptly generate --api-key pk_live_...
```

### 3. Create a client

```ts
import { createPromptlyClient } from '@promptlycms/prompts';

const promptly = createPromptlyClient({
  apiKey: process.env.PROMPTLY_API_KEY,
});
```

## Fetching prompts
### Single prompt

```ts
const result = await promptly.getPrompt('JPxlUpstuhXB5OwOtKPpj');

// Access prompt metadata
result.promptId;      // 'JPxlUpstuhXB5OwOtKPpj'
result.promptName;    // 'Review Prompt'
result.systemMessage; // 'You are a helpful assistant.'
result.temperature;   // 0.7
result.model;         // LanguageModel (auto-resolved from CMS config)

// Interpolate template variables (typed if you ran codegen)
const message = result.userMessage({
  pickupLocation: 'London',
  items: 'sofa',
});

// Get the raw template string
const template = String(result.userMessage);
// => 'Help with ${pickupLocation} moving ${items}.'
```

Fetch a specific version:

```ts
const result = await promptly.getPrompt('JPxlUpstuhXB5OwOtKPpj', {
  version: '2.0.0',
});
```

### Batch fetch
Fetch multiple prompts in parallel with typed results per position:

```ts
import type { PromptRequest } from '@promptlycms/prompts';

const [reviewPrompt, welcomePrompt] = await promptly.getPrompts([
  { promptId: 'JPxlUpstuhXB5OwOtKPpj' },
  { promptId: 'abc123', version: '2.0.0' },
]);

// Each result is typed to its own prompt's variables
reviewPrompt.userMessage({ pickupLocation: 'London', items: 'sofa' });
welcomePrompt.userMessage({ email: '[email protected]', subject: 'Hi' });
```

## AI SDK integration
Destructure `getPrompt()` and pass the properties directly to Vercel AI SDK functions:

```ts
import { generateText } from 'ai';

const { userMessage, systemMessage, temperature, model } =
  await promptly.getPrompt('my-prompt');

const { text } = await generateText({
  model,
  system: systemMessage,
  prompt: userMessage({ name: 'Alice', task: 'coding' }),
  temperature,
});
```

The model configured in the CMS is auto-resolved to the correct AI SDK provider.
## Model auto-detection

The SDK automatically resolves models configured in the CMS to the correct AI SDK provider based on the model name prefix:

| Prefix | Provider | Package |
|--------|----------|---------|
| `claude-*` | Anthropic | `@ai-sdk/anthropic` |
| `gpt-*`, `o1-*`, `o3-*`, `o4-*`, `chatgpt-*` | OpenAI | `@ai-sdk/openai` |
| `gemini-*` | Google | `@ai-sdk/google` |
| `mistral-*`, `mixtral-*`, `codestral-*` | Mistral | `@ai-sdk/mistral` |

CMS model display names (e.g. `claude-sonnet-4.6`) are mapped to their full API model IDs automatically.
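For intuition, the prefix dispatch in the table above can be sketched as a plain function. This is illustrative only: the real resolver returns a `LanguageModel` instance and also maps display names to full API model IDs.

```typescript
// Illustrative sketch of the prefix table above. The real SDK resolves to a
// LanguageModel instance; this version just names the package that would match.
function providerFor(modelId: string): string {
  if (modelId.startsWith('claude-')) return '@ai-sdk/anthropic';
  if (/^(gpt-|o1-|o3-|o4-|chatgpt-)/.test(modelId)) return '@ai-sdk/openai';
  if (modelId.startsWith('gemini-')) return '@ai-sdk/google';
  if (/^(mistral-|mixtral-|codestral-)/.test(modelId)) return '@ai-sdk/mistral';
  throw new Error(`No provider mapping for model "${modelId}"`);
}

providerFor('claude-sonnet-4-6'); // => '@ai-sdk/anthropic'
providerFor('o3-mini');           // => '@ai-sdk/openai'
```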
## Custom model resolver

If you need full control over model resolution, pass a `model` function:

```ts
import { anthropic } from '@ai-sdk/anthropic';

const promptly = createPromptlyClient({
  apiKey: process.env.PROMPTLY_API_KEY,
  model: (modelId) => anthropic('claude-sonnet-4-6'),
});
```

## Type generation
Running `npx promptly generate` creates a `promptly-env.d.ts` file that uses declaration merging to type your prompts:

```ts
// Auto-generated by @promptlycms/prompts — do not edit
import '@promptlycms/prompts';

declare module '@promptlycms/prompts' {
  interface PromptVariableMap {
    'JPxlUpstuhXB5OwOtKPpj': {
      [V in 'latest' | '2.0.0' | '1.0.0']: {
        pickupLocation: string;
        items: string;
      };
    };
    'abc123': {
      [V in 'latest' | '1.0.0']: {
        email: string;
        subject: string;
      };
    };
  }
}
```

With this file present, `getPrompt()` and `getPrompts()` return typed `userMessage` functions with autocomplete. Unknown prompt IDs fall back to `Record<string, string>`.
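One practical note (standard TypeScript behavior, not specific to this SDK): the generated declaration file only takes effect if your `tsconfig.json` picks it up. If your `include` patterns are limited to `src`, list the file explicitly:

```json
{
  "include": ["src", "promptly-env.d.ts"]
}
```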
Add the generated file to version control so types are available without running codegen in CI. Re-run `npx promptly generate` whenever you add, remove, or rename template variables in the CMS.
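At runtime, the typed `userMessage` functions boil down to `${variable}` placeholder substitution. Below is a minimal, dependency-free sketch of that assumed behavior, not the SDK's actual implementation:

```typescript
// Hypothetical stand-in for what a userMessage template function does.
// Unknown placeholders are left intact here; the SDK may handle them differently.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\$\{(\w+)\}/g, (match: string, name: string) =>
    Object.prototype.hasOwnProperty.call(vars, name) ? vars[name] : match,
  );
}

interpolate('Help with ${pickupLocation} moving ${items}.', {
  pickupLocation: 'London',
  items: 'sofa',
});
// => 'Help with London moving sofa.'
```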
## Error handling

All API errors throw `PromptlyError`:

```ts
import { PromptlyError } from '@promptlycms/prompts';

try {
  await promptly.getPrompt('nonexistent');
} catch (err) {
  if (err instanceof PromptlyError) {
    err.code;       // 'NOT_FOUND' | 'INVALID_KEY' | 'USAGE_LIMIT_EXCEEDED' | ...
    err.status;     // HTTP status code
    err.message;    // Human-readable error message
    err.usage;      // Usage data (on 429s)
    err.upgradeUrl; // Upgrade link (on 429s)
  }
}
```

## API reference
### `createPromptlyClient(config?)`

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `apiKey` | `string` | No | Your Promptly API key (defaults to the `PROMPTLY_API_KEY` env var) |
| `baseUrl` | `string` | No | API base URL (default: `https://api.promptlycms.com`) |
| `model` | `(modelId: string) => LanguageModel` | No | Custom model resolver — overrides auto-detection |

Returns a `PromptlyClient` with `getPrompt()` and `getPrompts()` methods.
### `client.getPrompt(promptId, options?)`

Fetch a single prompt. Returns a `PromptResult` with a typed `userMessage` when codegen types are present.

| Option | Type | Description |
|--------|------|-------------|
| `version` | `string` | Specific version to fetch (default: latest) |
### `client.getPrompts(entries)`

Fetch multiple prompts in parallel. Accepts a `PromptRequest[]` and returns a typed tuple matching the input order.
### `@promptlycms/prompts/schema`

Subpath export for working with Zod schemas from CMS schema fields:

```ts
import { buildZodSchema, schemaFieldsToZodSource } from '@promptlycms/prompts/schema';
```

- `buildZodSchema(fields)` — builds a Zod object schema at runtime from `SchemaField[]`
- `schemaFieldsToZodSource(fields)` — generates Zod source code as a string for codegen
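To make the codegen idea concrete, here is a hand-rolled sketch using an assumed, simplified field shape; the actual `SchemaField` type and the output format of `schemaFieldsToZodSource` may differ, so check the package's exported types:

```typescript
// Assumed field shape for illustration only, not the package's SchemaField type.
type Field = { name: string; type: 'string' | 'number' | 'boolean' };

// Emits Zod source text from field descriptors, mirroring the idea behind
// schemaFieldsToZodSource (the real output format may differ).
function fieldsToZodSource(fields: Field[]): string {
  const body = fields.map((f) => `  ${f.name}: z.${f.type}(),`).join('\n');
  return `z.object({\n${body}\n})`;
}

fieldsToZodSource([
  { name: 'rating', type: 'number' },
  { name: 'summary', type: 'string' },
]);
// => 'z.object({\n  rating: z.number(),\n  summary: z.string(),\n})'
```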
### CLI: `npx promptly generate`

| Flag | Alias | Description |
|------|-------|-------------|
| `--api-key` | | API key (defaults to the `PROMPTLY_API_KEY` env var) |
| `--output` | `-o` | Output path (default: `./promptly-env.d.ts`) |
## License
MIT
