# @bernierllc/ai-provider-anthropic
Anthropic Claude API adapter implementing the unified AI provider interface from @bernierllc/ai-provider-core.
## Features
- **All Claude 3 Models**: Full support for Opus, Sonnet, and Haiku
- **Streaming**: Async generator-based streaming completions
- **Vision Analysis**: Image analysis with all Claude 3 models
- **Extended Context**: Support for 200K token context windows
- **Cost Estimation**: Accurate cost calculation with model-specific pricing
- **TypeScript**: Full type safety with strict mode
- **Error Handling**: Comprehensive error handling with retry support
## Installation

```bash
npm install @bernierllc/ai-provider-anthropic
```

## Basic Usage
```typescript
import { AnthropicProvider } from '@bernierllc/ai-provider-anthropic';

const provider = new AnthropicProvider({
  providerName: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-opus-20240229'
});

// Generate completion
const response = await provider.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain TypeScript generics in simple terms.' }
  ],
  maxTokens: 500,
  temperature: 0.7
});

if (response.success) {
  console.log(response.content);
  console.log(`Tokens used: ${response.usage?.totalTokens}`);
}
```

## API Reference
### Constructor

```typescript
new AnthropicProvider(config: AnthropicProviderConfig)
```

**Config Options:**

- `providerName`: Must be `'anthropic'`
- `apiKey`: Anthropic API key (required)
- `defaultModel`: Default model to use (optional, defaults to `claude-3-opus-20240229`)
- `baseURL`: Custom API base URL (optional)
- `timeout`: Request timeout in milliseconds (optional, defaults to 60000)
- `maxRetries`: Maximum number of retries (optional, defaults to 3)
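Putting the options together, a constructor call exercising every field might look like this (the `baseURL` and numeric values below are illustrative choices, not recommendations):

```typescript
import { AnthropicProvider } from '@bernierllc/ai-provider-anthropic';

const provider = new AnthropicProvider({
  providerName: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-haiku-20240307', // cheaper default for routine calls
  baseURL: 'https://api.anthropic.com',    // override only when proxying
  timeout: 30000,                          // fail faster than the 60s default
  maxRetries: 5                            // retry harder than the default 3
});
```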
### Methods

#### `complete(request: CompletionRequest): Promise<CompletionResponse>`
Generate a text completion.
```typescript
const response = await provider.complete({
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  maxTokens: 100,
  temperature: 0.7
});
```

#### `streamComplete(request: CompletionRequest): AsyncGenerator<StreamChunk>`
Generate a streaming text completion.
```typescript
for await (const chunk of provider.streamComplete({
  messages: [
    { role: 'user', content: 'Write a short poem about coding' }
  ]
})) {
  process.stdout.write(chunk.delta);
  if (chunk.finishReason) {
    console.log(`\n\nFinished: ${chunk.finishReason}`);
  }
}
```

#### `analyzeImage(imageData, prompt, model?, maxTokens?, temperature?): Promise<CompletionResponse>`
Analyze an image using Claude's vision capabilities.
```typescript
import fs from 'node:fs';

const imageBuffer = fs.readFileSync('diagram.png');
const analysis = await provider.analyzeImage(
  imageBuffer,
  'Describe this architecture diagram in detail',
  'claude-3-opus-20240229'
);

if (analysis.success) {
  console.log(analysis.content);
}
```

#### `extendedContextCompletion(request): Promise<CompletionResponse>`
Process requests with extended context (200K tokens).
```typescript
import fs from 'node:fs';

const longDocument = fs.readFileSync('very-long-document.txt', 'utf-8');

const response = await provider.extendedContextCompletion({
  messages: [
    { role: 'user', content: `Summarize this document:\n\n${longDocument}` }
  ],
  enableExtendedContext: true
});
```

#### `estimateCost(request: CompletionRequest): CostEstimate`
Estimate the cost of a request before sending it.
```typescript
const cost = provider.estimateCost({
  messages: [{ role: 'user', content: 'Hello world!' }],
  model: 'claude-3-opus-20240229',
  maxTokens: 100
});

console.log(`Estimated cost: $${cost.estimatedCostUSD.toFixed(4)}`);
```

#### `getAvailableModels(): Promise<ModelInfo[]>`
Get information about all available Claude models.
```typescript
const models = await provider.getAvailableModels();

models.forEach(model => {
  console.log(`${model.name}: ${model.contextWindow} tokens`);
});
```

#### `checkHealth(): Promise<HealthStatus>`
Check if the Anthropic API is available.
```typescript
const health = await provider.checkHealth();

if (health.status === 'healthy') {
  console.log(`API is healthy (latency: ${health.latency}ms)`);
}
```

## Available Models
| Model | ID | Context | Max Output | Cost (Input / Output per 1M tokens) |
|-------|-----|---------|------------|-------------------------------------|
| Claude 3 Opus | `claude-3-opus-20240229` | 200K | 4096 | $15 / $75 |
| Claude 3 Sonnet | `claude-3-sonnet-20240229` | 200K | 4096 | $3 / $15 |
| Claude 3 Haiku | `claude-3-haiku-20240307` | 200K | 4096 | $0.25 / $1.25 |
All models support:
- Text completion
- Streaming
- Vision analysis
- Extended context (200K tokens)
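As a back-of-the-envelope check against the pricing table above, per-request cost is just token counts times the per-million rates. The package's `estimateCost` computes this from a real request; the helper below is only a sketch of the arithmetic:

```typescript
// Per-1M-token prices (USD) from the table above.
const pricing = {
  'claude-3-opus-20240229': { input: 15, output: 75 },
  'claude-3-sonnet-20240229': { input: 3, output: 15 },
  'claude-3-haiku-20240307': { input: 0.25, output: 1.25 }
} as const;

function costUSD(
  model: keyof typeof pricing,
  inputTokens: number,
  outputTokens: number
): number {
  const p = pricing[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// A 10K-token prompt with a 1K-token reply:
console.log(costUSD('claude-3-opus-20240229', 10_000, 1_000));  // 0.225
console.log(costUSD('claude-3-haiku-20240307', 10_000, 1_000)); // 0.00375
```

At these rates Haiku is 60x cheaper than Opus for the same traffic, which is why routing routine requests to a cheaper `defaultModel` pays off quickly.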
## Error Handling
The provider returns structured error responses:
```typescript
const response = await provider.complete({
  messages: [{ role: 'user', content: 'Test' }]
});

if (!response.success) {
  console.error('Error:', response.error);
}
```

## Limitations
### Embeddings
Anthropic does not provide an embeddings API. Use @bernierllc/ai-provider-openai for embeddings:
```typescript
const embeddingResponse = await provider.generateEmbeddings({
  input: 'test text'
});
// Returns: { success: false, error: 'Anthropic does not provide an embeddings API...' }
```

### Content Moderation
Claude has built-in safety features (Constitutional AI), so no separate moderation API is needed:
```typescript
const moderation = await provider.moderate('content');
// Always returns: { success: true, flagged: false, ... }
```

## Integration Status
- Logger: ✅ Integrated - Uses `@bernierllc/logger` for error logging
- Docs-Suite: ✅ Ready - Full TypeDoc/JSDoc documentation
- NeverHub: ⚠️ Optional - Can integrate for monitoring and metrics
## Advanced Usage

### Model Comparison
```typescript
const models = await provider.getAvailableModels();

models.forEach(model => {
  console.log(`${model.name}:`);
  console.log(`  Context: ${model.contextWindow.toLocaleString()} tokens`);
  console.log(`  Capabilities: ${model.capabilities.join(', ')}`);
  if (model.pricing) {
    console.log(`  Input: $${model.pricing.inputPricePerToken * 1000000}/M tokens`);
    console.log(`  Output: $${model.pricing.outputPricePerToken * 1000000}/M tokens`);
  }
});
```

### Complex Reasoning Task
```typescript
// Claude excels at complex reasoning and analysis
const response = await provider.complete({
  messages: [
    {
      role: 'user',
      content: `Analyze this code architecture and suggest improvements:

[Large codebase content...]

Consider: scalability, maintainability, security, performance`
    }
  ],
  model: 'claude-3-opus-20240229',
  maxTokens: 4096,
  temperature: 0.3 // Lower temperature for analytical tasks
});
```

### Vision with Multiple Images
```typescript
import fs from 'node:fs';

// Analyze multiple images in sequence
const images = ['image1.png', 'image2.png', 'image3.png'];

for (const imagePath of images) {
  const imageBuffer = fs.readFileSync(imagePath);
  const analysis = await provider.analyzeImage(
    imageBuffer,
    'What do you see in this image?'
  );
  console.log(`${imagePath}: ${analysis.content}`);
}
```

## TypeScript Types
```typescript
import type {
  AnthropicProviderConfig,
  AnthropicVisionRequest,
  AnthropicExtendedContextRequest
} from '@bernierllc/ai-provider-anthropic';

import type {
  CompletionRequest,
  CompletionResponse,
  StreamChunk,
  ModelInfo,
  HealthStatus,
  CostEstimate
} from '@bernierllc/ai-provider-core';
```

## License
Copyright (c) 2025 Bernier LLC. All rights reserved.
## See Also
- @bernierllc/ai-provider-core - Abstract provider interface
- @bernierllc/ai-provider-openai - OpenAI provider implementation
- @bernierllc/retry-policy - Retry logic used by this package
- @bernierllc/logger - Logging used by this package
