@cobbl-ai/sdk
The official TypeScript/JavaScript SDK for Cobbl - a feedback-driven PromptOps platform for LLM applications.
Features
- 🚀 Simple API - Run prompts and collect feedback with just a few lines of code
- 🔒 Type-safe - Full TypeScript support with comprehensive type definitions
- 🎯 Framework agnostic - Works with Node.js, Next.js, Express, and any JavaScript framework
- 📦 Zero config - Works out of the box with sensible defaults
- 🌐 Cross-platform - Supports both CommonJS and ES modules (see the example after this list)
- ⚡ Optimized - Minimal bundle size, tree-shakeable exports
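Because the package ships both CommonJS and ES module builds, it can be loaded from either module system. The Quick Start below uses the ES module form; a CommonJS project would require it instead, roughly like this sketch (assuming the same named export in the CJS build):
// CommonJS usage - the ES module form is shown in Quick Start below
const { CobblClient } = require('@cobbl-ai/sdk')
const client = new CobblClient({
  apiKey: process.env.PROMPTI_API_KEY,
})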
Installation
# npm
npm install @cobbl-ai/sdk
# yarn
yarn add @cobbl-ai/sdk
# pnpm
pnpm add @cobbl-ai/sdk
Quick Start
import { CobblClient } from '@cobbl-ai/sdk'
// Initialize the client
const client = new CobblClient({
apiKey: process.env.PROMPTI_API_KEY,
})
// Run a prompt
const result = await client.runPrompt('sales_summary', {
topic: 'Q4 Results',
tone: 'friendly',
audience: 'investors',
})
console.log(result.output) // AI-generated response
console.log(result.runId) // Save this to link feedback later
// Submit user feedback
await client.submitFeedback({
runId: result.runId,
helpful: 'not_helpful',
userFeedback: 'The response was too formal and lengthy',
})
Configuration
Initializing the Client
import { CobblClient } from '@cobbl-ai/sdk'
const client = new CobblClient({
apiKey: 'your-api-key', // Your Cobbl API key
})
Environment Variables
We recommend storing your API key in environment variables:
# .env
PROMPTI_API_KEY=your_api_key_here
Then load it in your application:
const client = new CobblClient({
apiKey: process.env.PROMPTI_API_KEY,
})
API Reference
runPrompt(promptSlug, input)
Execute a prompt with the given input variables.
Parameters:
- promptSlug (string): The unique slug identifier for the prompt
- input (PromptInput): Input variables to populate the prompt template
Returns: Promise<RunPromptResponse>
Response:
{
runId: string // Unique ID for this run - use for feedback
output: string // AI-generated response
tokenUsage: {
inputTokens: number
outputTokens: number
totalTokens: number
}
renderedPrompt: string // The prompt sent to the LLM
promptVersion: {
// Metadata about the prompt version used
id: string
versionNumber: number
template: string
variables: Array<{
key: string
type: string
required: boolean
}>
provider: 'openai' | 'anthropic' | 'google'
model: string
// ... more fields
}
}
Example:
const result = await client.runPrompt('customer_email', {
customerName: 'John Doe',
issue: 'login_problem',
urgency: 'high',
})
console.log(result.output)
// => "Dear John Doe, We understand you're experiencing login issues..."submitFeedback(feedback)
Submit user feedback for a prompt run.
Parameters:
- feedback (FeedbackSubmission):
  - runId (string): The run ID from a previous runPrompt call
  - helpful ('helpful' | 'not_helpful'): Whether the output was helpful
  - userFeedback (string): Detailed feedback message
Returns: Promise<SubmitFeedbackResponse>
Response:
{
feedbackId: string // Unique ID for the feedback item
message: string // Success message
}
Example:
await client.submitFeedback({
runId: result.runId,
helpful: 'not_helpful',
userFeedback: 'The tone was too casual for a professional email',
})
Advanced Usage
Error Handling
The SDK throws CobblError for all error cases. You can catch and handle these errors:
import { CobblClient, CobblError } from '@cobbl-ai/sdk'
try {
const result = await client.runPrompt('my-prompt', { topic: 'test' })
} catch (error) {
if (error instanceof CobblError) {
console.error(`Error [${error.code}]: ${error.message}`)
// Handle specific error types
switch (error.code) {
case 'UNAUTHORIZED':
console.error('Invalid API key')
break
case 'NOT_FOUND':
console.error('Prompt not found')
break
case 'INVALID_REQUEST':
console.error('Invalid request:', error.details)
break
case 'RATE_LIMIT_EXCEEDED':
console.error('Rate limit exceeded, try again later')
break
default:
console.error('Unexpected error:', error)
}
} else {
console.error('Unknown error:', error)
}
}
Error Codes:
- INVALID_CONFIG - Invalid SDK configuration
- INVALID_REQUEST - Malformed request (e.g., missing required fields)
- UNAUTHORIZED - Invalid or missing API key
- NOT_FOUND - Resource not found (e.g., prompt doesn't exist)
- RATE_LIMIT_EXCEEDED - Too many requests (see the retry sketch after this list)
- SERVER_ERROR - Server-side error
- NETWORK_ERROR - Network connectivity issue
- API_ERROR - Other API errors
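These codes make failures easy to branch on. As one example (not part of the SDK, just a sketch built on the error handling shown above), a helper could retry a run a few times when RATE_LIMIT_EXCEEDED comes back, backing off between attempts:
import { CobblClient, CobblError } from '@cobbl-ai/sdk'
import type { PromptInput, RunPromptResponse } from '@cobbl-ai/sdk'
// Hypothetical helper: retries only on rate-limit errors, up to maxAttempts times.
async function runPromptWithRetry(
  client: CobblClient,
  promptSlug: string,
  input: PromptInput,
  maxAttempts = 3
): Promise<RunPromptResponse> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await client.runPrompt(promptSlug, input)
    } catch (error) {
      const isRateLimit =
        error instanceof CobblError && error.code === 'RATE_LIMIT_EXCEEDED'
      if (!isRateLimit || attempt >= maxAttempts) throw error
      // Wait 1s, 2s, 4s, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)))
    }
  }
}
Usage mirrors runPrompt: const result = await runPromptWithRetry(client, 'sales_summary', { topic: 'Q4 Results' }).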
TypeScript Support
The SDK is written in TypeScript and includes full type definitions:
import type {
PromptInput,
RunPromptResponse,
FeedbackSubmission,
TokenUsage,
} from '@cobbl-ai/sdk'
// Type-safe prompt inputs
const input: PromptInput = {
topic: 'AI Safety',
tone: 'professional',
}
// Type-safe response handling
const result: RunPromptResponse = await client.runPrompt('blog_post', input)
// Access token usage
const tokens: TokenUsage = result.tokenUsage
console.log(`Used ${tokens.totalTokens} tokens`)
Framework Examples
Express.js
import express from 'express'
import { CobblClient } from '@cobbl-ai/sdk'
const app = express()
app.use(express.json()) // needed so req.body is parsed from JSON
const cobbl = new CobblClient({
apiKey: process.env.PROMPTI_API_KEY,
})
app.post('/api/generate', async (req, res) => {
try {
const result = await cobbl.runPrompt('content_generator', req.body)
res.json({
output: result.output,
runId: result.runId,
})
} catch (error) {
res.status(500).json({ error: 'Failed to generate content' })
}
})
app.post('/api/feedback', async (req, res) => {
try {
await cobbl.submitFeedback(req.body)
res.json({ success: true })
} catch (error) {
res.status(500).json({ error: 'Failed to submit feedback' })
}
})
Next.js API Routes
// app/api/generate/route.ts
import { CobblClient } from '@cobbl-ai/sdk'
import { NextResponse } from 'next/server'
const cobbl = new CobblClient({
apiKey: process.env.PROMPTI_API_KEY!,
})
export async function POST(request: Request) {
const body = await request.json()
try {
const result = await cobbl.runPrompt('summarizer', {
text: body.text,
length: 'short',
})
return NextResponse.json(result)
} catch (error) {
return NextResponse.json(
{ error: 'Failed to generate summary' },
{ status: 500 }
)
}
}
React Component (Client-Side via API)
'use client'
import { useState } from 'react'
export default function FeedbackForm({ runId }: { runId: string }) {
const [feedback, setFeedback] = useState('')
const [helpful, setHelpful] = useState<'helpful' | 'not_helpful'>('helpful')
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault()
// Call your API route that uses the SDK
await fetch('/api/feedback', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
runId,
helpful,
userFeedback: feedback
})
})
setFeedback('')
}
return (
<form onSubmit={handleSubmit}>
<div>
<label>
<input
type="radio"
value="helpful"
checked={helpful === 'helpful'}
onChange={(e) => setHelpful(e.target.value as 'helpful')}
/>
Helpful
</label>
<label>
<input
type="radio"
value="not_helpful"
checked={helpful === 'not_helpful'}
onChange={(e) => setHelpful(e.target.value as 'not_helpful')}
/>
Not Helpful
</label>
</div>
<textarea
value={feedback}
onChange={(e) => setFeedback(e.target.value)}
placeholder="Tell us what could be improved..."
/>
<button type="submit">Submit Feedback</button>
</form>
)
}
Best Practices
1. Store Run IDs for Feedback
Always save the runId returned from runPrompt() so users can provide feedback later:
// Store in database with your application data
const result = await client.runPrompt('recommendation', { userId: '123' })
await db.recommendations.create({
userId: '123',
content: result.output,
promptRunId: result.runId, // ← Save this!
})
2. Handle Errors Gracefully
Always wrap SDK calls in try-catch blocks and provide fallback behavior:
try {
const result = await client.runPrompt('greeting', { name: userName })
return result.output
} catch (error) {
// Log for debugging
console.error('Prompt failed:', error)
// Return fallback content
return `Hello, ${userName}!`
}
3. Use Environment-Specific API Keys
Use a different API key for development, staging, and production. The code stays the same; only the value of the environment variable changes per environment:
const client = new CobblClient({
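  // PROMPTI_API_KEY holds a different key in each environment (dev, staging, prod)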
apiKey: process.env.PROMPTI_API_KEY,
})
4. Implement Rate Limiting
Add rate limiting on your application side to avoid hitting API limits:
import rateLimit from 'express-rate-limit'
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // limit each IP to 100 requests per windowMs
})
app.use('/api/generate', limiter)
Development
Building from Source
# Install dependencies
pnpm install
# Build the SDK
pnpm build
# Type check
pnpm typecheck
# Clean build artifacts
pnpm clean
Project Structure
sdk/
├── src/
│ ├── client.ts # Main SDK client
│ ├── errors.ts # Error classes
│ ├── types.ts # Type definitions
│ ├── shared-types.ts # Inlined types from shared package
│ └── index.ts # Public exports
├── dist/ # Compiled output (created by build)
│ ├── index.js # CommonJS bundle
│ ├── index.mjs # ES modules bundle
│ ├── index.d.ts # TypeScript declarations (CJS)
│ ├── index.d.mts # TypeScript declarations (ESM)
│ └── *.map # Source maps
├── examples/ # Usage examples
├── tsup.config.ts # Build configuration
├── package.json
└── README.md
Build System
This SDK uses tsup for building (a config sketch follows this list), which provides:
- Zero config bundling - Works out of the box
- Dual package support - Generates both CJS and ESM
- Type bundling - Inlines all type dependencies
- Tree-shaking - Removes unused code
- Source maps - For debugging
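The repository's tsup.config.ts is not reproduced here, but a minimal config matching the features above (dual CJS/ESM output, bundled type declarations, tree-shaking, source maps) might look roughly like this sketch:
// tsup.config.ts - illustrative sketch; the actual config may differ
import { defineConfig } from 'tsup'
export default defineConfig({
  entry: ['src/index.ts'], // public entry point (src/index.ts in the tree above)
  format: ['cjs', 'esm'],  // emit dist/index.js and dist/index.mjs
  dts: true,               // bundle type declarations (index.d.ts / index.d.mts)
  sourcemap: true,         // emit *.map files for debugging
  treeshake: true,         // drop unused code from the bundles
  clean: true,             // wipe dist/ before each build
})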
Publishing to npm
Before publishing, make sure to:
- Update the version in package.json
- Update CHANGELOG.md
- Build and test the package
# Test the package locally
pnpm pack
# This creates cobbl-ai-sdk-0.1.0.tgz that you can test
# Publish to npm (requires npm login)
pnpm publish --access public
# Or publish with tag (for beta/alpha releases)
pnpm publish --tag beta --access public
Support
- 📧 Email: [email protected]
- 🐛 Issues: GitHub Issues
- 📚 Documentation: docs.cobbl.ai
- 💬 Discord: Join our community
License
MIT © Cobbl
