superprompts v0.0.4
# Superprompts
A TypeScript/JavaScript library for fetching and using prompts from the Superprompts API with built-in caching for optimal performance. For more information visit https://superprompts.app.
## Features
- 🚀 Fast: Built-in 5-second caching to reduce API calls
- 🔧 TypeScript: Full TypeScript support with type definitions
- 📦 Lightweight: Zero dependencies, minimal bundle size
- 🎯 Simple: Easy-to-use API for prompt management
## Installation

```bash
npm install superprompts
# or
yarn add superprompts
# or
pnpm add superprompts
```

## Quick Start
```typescript
import { createPromptClient } from 'superprompts';

const promptClient = createPromptClient({
  apiKey: 'your-superprompts-api-key'
});

// Fetch a prompt by ID
const prompt = await promptClient('your-prompt-id');
console.log(prompt.prompt); // The actual prompt text
```

## API Reference
### createPromptClient({ apiKey })

Creates a prompt client instance with caching enabled.

Parameters:

- `apiKey` (string): Your Superprompts API key

Returns:

- A function that takes a `promptId` and returns a Promise with the prompt data
### promptClient(promptId)

Fetches a prompt from the Superprompts API.

Parameters:

- `promptId` (string): The ID of the prompt to fetch

Returns:

- Promise containing the prompt data with the following structure:

```typescript
{
  prompt: string; // The actual prompt text
  // ... other metadata from the API
}
```
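The same shape can be written as a TypeScript interface. Only the `prompt` field is documented here, so the index signature for the remaining metadata is an assumption, not the library's published type:

```typescript
// Illustrative interface for the resolved prompt data (a sketch; the
// field set beyond `prompt` is assumed, not taken from the library).
interface PromptData {
  prompt: string; // the actual prompt text
  [key: string]: unknown; // other metadata returned by the API
}

// An example value conforming to the interface:
const example: PromptData = {
  prompt: 'Summarize the following text.',
  id: 'your-prompt-id'
};
```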
## Examples

### Basic Usage
```typescript
import { createPromptClient } from 'superprompts';

const promptClient = createPromptClient({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

async function generateContent() {
  try {
    const promptData = await promptClient('my-prompt-id');
    console.log('Retrieved prompt:', promptData.prompt);
  } catch (error) {
    console.error('Failed to fetch prompt:', error);
  }
}
```

### Using with OpenAI
```typescript
import { createPromptClient } from 'superprompts';
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const promptClient = createPromptClient({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

async function generateWithOpenAI() {
  try {
    // Fetch the prompt from Superprompts
    const { prompt } = await promptClient('code-review-prompt');

    // Use the prompt with OpenAI
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [
        {
          role: 'system',
          content: prompt
        },
        {
          role: 'user',
          content: 'Please review this code: function add(a, b) { return a + b; }'
        }
      ],
      temperature: 0.7
    });

    console.log('AI Response:', completion.choices[0].message.content);
  } catch (error) {
    console.error('Error:', error);
  }
}

generateWithOpenAI();
```

### Using with AI SDK
```typescript
import { createPromptClient } from 'superprompts';
import { openai } from 'ai/openai';
import { generateText } from 'ai';

const promptClient = createPromptClient({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

async function generateWithAISDK() {
  try {
    // Fetch the prompt from Superprompts
    const { prompt } = await promptClient('creative-writing-prompt');

    // Use the prompt with AI SDK
    const { text } = await generateText({
      model: openai('gpt-4'),
      system: prompt,
      prompt: 'Write a short story about a robot learning to paint.',
      temperature: 0.8
    });

    console.log('Generated story:', text);
  } catch (error) {
    console.error('Error:', error);
  }
}

generateWithAISDK();
```

### Caching Example
```typescript
import { createPromptClient } from 'superprompts';

const promptClient = createPromptClient({
  apiKey: process.env.SUPERPROMPTS_API_KEY
});

async function demonstrateCaching() {
  const promptId = 'my-prompt-id';

  console.time('First fetch');
  const prompt1 = await promptClient(promptId);
  console.timeEnd('First fetch'); // ~200ms (API call)

  console.time('Second fetch');
  const prompt2 = await promptClient(promptId);
  console.timeEnd('Second fetch'); // ~1ms (cached)

  console.log('Same prompt?', prompt1.prompt === prompt2.prompt); // true

  // Wait 6 seconds for the cache to expire
  await new Promise((resolve) => setTimeout(resolve, 6000));

  console.time('Third fetch');
  const prompt3 = await promptClient(promptId);
  console.timeEnd('Third fetch'); // ~200ms (cache expired, new API call)
}

demonstrateCaching();
```

## Caching
The library includes a built-in caching mechanism with a 5-second TTL (Time To Live). This means:
- First request to a prompt ID: Fetches from API
- Subsequent requests within 5 seconds: Returns cached result
- After 5 seconds: Cache expires, next request fetches fresh data from API
This caching strategy balances performance with data freshness, ensuring you get fast responses while keeping prompts reasonably up-to-date.
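As an illustration of this strategy only (not the library's actual implementation), a 5-second TTL cache can be sketched as a generic wrapper around any async fetcher:

```typescript
// Minimal TTL-cache sketch: wraps an async fetcher so repeated calls
// within `ttlMs` return the cached value instead of re-fetching.
type CacheEntry<T> = { value: T; expiresAt: number };

function withTTLCache<T>(
  fetcher: (key: string) => Promise<T>,
  ttlMs = 5000
): (key: string) => Promise<T> {
  const cache = new Map<string, CacheEntry<T>>();
  return async (key: string) => {
    const hit = cache.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value; // fresh enough: serve from cache
    }
    const value = await fetcher(key); // miss or expired: fetch fresh
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}
```

The `withTTLCache` name and signature are hypothetical; they only demonstrate the fetch-once-then-reuse behavior described above.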
## Error Handling
The library throws errors for various scenarios:
```typescript
try {
  const prompt = await promptClient('invalid-prompt-id');
} catch (error) {
  if (error.message.includes('Failed to retrieve prompt')) {
    console.error('API Error:', error.message);
    // Handle API errors (404, 401, 500, etc.)
  }
}
```

## Environment Variables
Make sure to set your Superprompts API key:
```bash
# .env
SUPERPROMPTS_API_KEY=your-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
```

## TypeScript Support
The library is written in TypeScript and includes full type definitions:
```typescript
import { createPromptClient } from 'superprompts';

// TypeScript will infer the return type
const promptClient = createPromptClient({
  apiKey: 'your-key'
});

// promptData is properly typed
const promptData = await promptClient('prompt-id');
console.log(promptData.prompt); // string
```

## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT © Superprompts, Inc
