LLM Biprism
A TypeScript library for converting between different LLM provider request/response formats.
Features
- ✅ Fluent API - Self-documenting, type-safe builder pattern
- ✅ Request Conversion - Convert between OpenAI Chat Completions and OpenAI Responses formats
- ✅ Response Conversion - Convert between OpenAI response formats
- ✅ Type-Safe - Full TypeScript support with proper type inference
- ✅ Warning System - Alerts you to lossy conversions
- ✅ Extensible - Easy to add new providers and formats
Installation
```bash
npm install llm-biprism
```

Quick Start

```typescript
import LLMBiprism from 'llm-biprism';

// Convert OpenAI Chat Completions to OpenAI Responses format
const converter = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses');

const result = converter.convert({
	model: 'gpt-4',
	messages: [
		{ role: 'user', content: 'Hello!' }
	]
});

if (result.success) {
	console.log(result.response);
	console.log(result.warnings); // Check for lossy conversions
} else {
	console.error(result.error);
}
```

Usage
Basic Conversion
```typescript
import LLMBiprism from 'llm-biprism';

// Create a reusable converter
const converter = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses');

// Use it multiple times
const result1 = converter.convert(request1);
const result2 = converter.convert(request2);
```

One-Shot Conversion
```typescript
// Convert directly without storing the converter
const result = LLMBiprism.request()
	.from('openai-responses')
	.to('openai-chat-completions')
	.convert(data);
```

Type-Safe Converters
```typescript
import { type RequestConverter } from 'llm-biprism';

// Explicit type annotation
const converter: RequestConverter<'openai-chat-completions', 'openai-responses'> =
	LLMBiprism.request()
		.from('openai-chat-completions')
		.to('openai-responses');
```

Handling Results
```typescript
const result = converter.convert(data);

if (result.success) {
	// Success path
	const converted = result.response;
	const warnings = result.warnings; // Check for lossy conversions

	if (warnings.length > 0) {
		console.warn('Some features were not fully converted:', warnings);
	}
} else {
	// Error path
	console.error('Conversion failed:', result.error);
	console.log('Warnings:', result.warnings);
}
```

Supported Formats
Request Formats
| Format | Description |
|--------|-------------|
| openai-chat-completions | OpenAI Chat Completions API request format |
| openai-responses | OpenAI Responses API request format |
Response Formats
| Format | Description |
|--------|-------------|
| openai-chat-completions | OpenAI Chat Completions API response format |
| openai-responses | OpenAI Responses API response format |
Coming Soon
- Google Vertex AI
- Anthropic Claude
- Streaming response handling
- And more!
Architecture
LLM Biprism uses an internal intermediate format to ensure consistent conversions between different providers. All conversions are validated and transformed through this intermediate layer (see the sketch after the list below), ensuring:
- Consistent behavior across all format pairs
- Easy addition of new formats
- Single source of truth for validation
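To make the layering concrete, here is a minimal sketch of that hub-and-spoke design. The names and shapes below (`IntermediateRequest`, `fromChatCompletions`, `toResponses`) are illustrative assumptions, not the library's actual internals:

```typescript
// Hypothetical sketch of the intermediate-format flow; these names and
// shapes are assumptions for illustration, not the library's real code.
interface IntermediateMessage {
	role: string;
	content: string;
}

interface IntermediateRequest {
	model: string;
	messages: IntermediateMessage[];
	warnings: string[]; // lossy-conversion notes collected during parsing
}

// Each source format parses into the intermediate form once...
function fromChatCompletions(req: { model: string; messages: IntermediateMessage[] }): IntermediateRequest {
	return { model: req.model, messages: req.messages, warnings: [] };
}

// ...and each target format renders from it, so adding a new provider means
// one parser and one renderer instead of a converter for every format pair.
function toResponses(req: IntermediateRequest): { model: string; input: IntermediateMessage[] } {
	return { model: req.model, input: req.messages };
}
```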
API Reference
LLMBiprism.request()
Start building a request converter.
```typescript
LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses')
	.convert(data);
```

LLMBiprism.response()
Start building a response converter.
```typescript
LLMBiprism.response()
	.from('openai-chat-completions')
	.to('openai-responses')
	.convert(responseData);
```

Converter Methods
convert(data)
Converts data from source format to target format.
Returns: `ConvertorResponse<T>`

```typescript
{
	success: true,
	response: T,
	warnings: string[]
} | {
	success: false,
	error: string,
	warnings: string[]
}
```

from
Getter property that returns the source format.
```typescript
const converter = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses');

console.log(converter.from); // 'openai-chat-completions'
```

to
Getter property that returns the target format.
```typescript
console.log(converter.to); // 'openai-responses'
```

type
Getter property that returns 'request' or 'response'.
```typescript
console.log(converter.type); // 'request'
```

Examples
For complete, runnable examples, see the examples directory.
Run the examples:
```bash
export OPENAI_API_KEY=your_key_here

# Chat Completions → Responses API
npm run example:chat-to-responses

# Responses API → Chat Completions
npm run example:responses-to-chat
```

Converting Between OpenAI Formats
```typescript
const converter = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses');

const chatRequest = {
	model: 'gpt-4',
	messages: [
		{ role: 'system', content: 'You are helpful' },
		{ role: 'user', content: 'Hello!' }
	],
	max_completion_tokens: 100
};

const result = converter.convert(chatRequest);
// result.response is now in OpenAI Responses format
```

Bidirectional Conversion
```typescript
// Convert from Chat Completions to Responses
const toResponses = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses');

const responsesFormat = toResponses.convert(chatRequest);

// Convert back from Responses to Chat Completions
const toChat = LLMBiprism.request()
	.from('openai-responses')
	.to('openai-chat-completions');

// Narrow the result before reading `response`
if (responsesFormat.success) {
	const chatFormat = toChat.convert(responsesFormat.response);
}
```

Converting Responses
```typescript
// Convert OpenAI Chat Completion responses
const converter = LLMBiprism.response()
	.from('openai-chat-completions')
	.to('openai-responses');

const chatCompletionResponse = {
	id: 'chatcmpl-123',
	object: 'chat.completion',
	created: 1677652288,
	model: 'gpt-4',
	choices: [{
		index: 0,
		message: {
			role: 'assistant',
			content: 'Hello! How can I help you?',
			refusal: null
		},
		finish_reason: 'stop',
		logprobs: null
	}],
	usage: {
		prompt_tokens: 10,
		completion_tokens: 20,
		total_tokens: 30
	}
};

const result = converter.convert(chatCompletionResponse);
// result.response is now in OpenAI Responses format
```

Handling Warnings
```typescript
const result = converter.convert(data);

if (result.success && result.warnings.length > 0) {
	result.warnings.forEach(warning => {
		console.warn(`Warning: ${warning}`);
	});

	// Decide whether to proceed or handle warnings
	if (result.warnings.includes('STREAM_UNDEFINED')) {
		console.log('Stream parameter was missing, defaulting to false');
	}
}
```

Development
Running Tests
```bash
npm test              # Run tests in watch mode
npm run test:watch    # Same as above
npm test -- --run     # Run tests once (CI mode)
npm run coverage      # Run with coverage report
```

Linting
```bash
npm run lint          # Check code
npm run lint:fix      # Auto-fix issues
```

Building
```bash
npm run build         # TypeScript compilation + Vite build
```

Design Principles
Validation
- Strict validation of input data
- Verify all required fields exist
- Check types carefully
- Return detailed errors (illustrated below)
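As an illustration of that error path, a conversion rejected by validation might be handled like the sketch below. The invalid input and the logged wording are assumptions; only the result shape follows `ConvertorResponse<T>` from the API reference:

```typescript
// Sketch: `converter` is any request converter from the examples above.
// The input deliberately omits the required `model` field; the exact
// error text is the library's own and is not reproduced here.
const bad = converter.convert({ messages: [] });

if (!bad.success) {
	console.error('Validation failed:', bad.error); // detailed error string
	console.warn('Warnings:', bad.warnings);        // still returned on failure
}
```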
Warning System
- Warnings indicate lossy conversions
- Always returned, even on success
- Help users understand limitations
- Example: `STREAM_UNDEFINED` when the `stream` parameter is missing (see the sketch below)
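For instance, converting a request that omits `stream` should surface that warning. A minimal sketch using the fluent API from Quick Start:

```typescript
// Sketch: a request without `stream` should report the lossy-conversion
// warning documented above. Checking `success` narrows the result type.
const result = LLMBiprism.request()
	.from('openai-chat-completions')
	.to('openai-responses')
	.convert({ model: 'gpt-4', messages: [{ role: 'user', content: 'Hi' }] });

if (result.success && result.warnings.includes('STREAM_UNDEFINED')) {
	console.warn('stream was not set; the conversion defaults it to false');
}
```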
Contributing
Contributions welcome! Please:
- Follow the existing code style (tabs, single quotes)
- Add tests for new features
- Update documentation
- Run `npm run lint:fix` before committing
License
Apache License 2.0
Roadmap
- [ ] Google Vertex AI support
- [ ] Anthropic Claude support
- [ ] Tool/function calling support
- [ ] Streaming response handling
- [ ] Configuration options API
