# ai-mesh

Thin wrapper around the Vercel AI SDK with DI support. Use any provider the SDK supports.

## Install

```bash
npm install @ekipnico/ai-mesh
```

## Quick Start
```ts
import { Mesh } from 'mesh-ioc';
import { LlmService, LlmProvider, VercelLlmProvider, ModelRegistry, UsageTracker } from '@ekipnico/ai-mesh';

const mesh = new Mesh('MyApp');
mesh.service(LlmService);
mesh.service(ModelRegistry);
mesh.service(UsageTracker);
mesh.service(LlmProvider, VercelLlmProvider);

const llm = mesh.resolve(LlmService);
const response = await llm.generate({
  model: 'gpt-4o',
  prompt: 'Hello!',
});
```

## Using Any Provider
Pass any Vercel AI SDK model directly:
```ts
// Groq
import { createGroq } from '@ai-sdk/groq';

const groq = createGroq({ apiKey: process.env.GROQ_API_KEY });
await llm.generate({
  model: groq('llama-3.1-70b-versatile'),
  prompt: 'Hello!',
});

// Mistral
import { createMistral } from '@ai-sdk/mistral';

const mistral = createMistral({ apiKey: process.env.MISTRAL_API_KEY });
await llm.generate({
  model: mistral('mistral-large-latest'),
  prompt: 'Hello!',
});

// DeepSeek
import { createDeepSeek } from '@ai-sdk/deepseek';

const deepseek = createDeepSeek({ apiKey: process.env.DEEPSEEK_API_KEY });
await llm.generate({
  model: deepseek('deepseek-chat'),
  prompt: 'Hello!',
});

// Any other @ai-sdk/* provider works the same way
```
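The same pattern should extend to self-hosted, OpenAI-compatible endpoints via the AI SDK's `@ai-sdk/openai-compatible` package. A minimal sketch, assuming a local server; the `baseURL` and model ID below are placeholders, not ai-mesh defaults:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

// Hypothetical local endpoint (e.g. LM Studio, llama.cpp server);
// adjust baseURL and the model ID to whatever your server exposes.
const local = createOpenAICompatible({
  name: 'local',
  baseURL: 'http://localhost:1234/v1',
});

await llm.generate({
  model: local('llama-3.1-8b-instruct'),
  prompt: 'Hello!',
});
```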
## Built-in Providers

For OpenAI, Anthropic, and Google, just use string IDs:

```bash
# Set API keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
```

```ts
await llm.generate({ model: 'gpt-4o', prompt: '...' });
await llm.generate({ model: 'claude-sonnet', prompt: '...' });
await llm.generate({ model: 'gemini-flash', prompt: '...' });
```

New models are auto-detected by name pattern:
```ts
await llm.generate({ model: 'gpt-5', prompt: '...' });           // → openai
await llm.generate({ model: 'claude-opus-4.5', prompt: '...' }); // → anthropic
await llm.generate({ model: 'gemini-3-flash', prompt: '...' });  // → google
```

## API
### generate(params)
```ts
const response = await llm.generate({
  model: 'gpt-4o',             // string ID or LanguageModel
  context: 'You are helpful.', // system prompt
  prompt: 'Hello',             // required
  messages: [],                // conversation history
  images: [],                  // for multimodal
  maxTokens: 1000,
  temperature: 0.7,
});
// → { content, usage, finishReason }
```
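The shape of `messages` entries is not spelled out above; since ai-mesh wraps the Vercel AI SDK, a reasonable assumption is SDK-style `{ role, content }` objects. A hypothetical multi-turn call under that assumption:

```ts
// Assumes messages follow the Vercel AI SDK convention ({ role, content });
// check the ai-mesh types if your version differs.
const reply = await llm.generate({
  model: 'gpt-4o',
  context: 'You are a terse assistant.',
  messages: [
    { role: 'user', content: 'What is mesh-ioc?' },
    { role: 'assistant', content: 'A tiny DI container.' },
  ],
  prompt: 'And what does ai-mesh add on top of it?',
});
console.log(reply.content);
```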
### generateObject(params)

```ts
const person = await llm.generateObject({
  model: 'gpt-4o',
  prompt: 'Extract: John is 30',
  outputSchema: PersonSchema,
});
// → parsed object
```
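`PersonSchema` is not defined above. Since `generateObject` in the underlying Vercel AI SDK accepts Zod schemas, a plausible definition (assuming `outputSchema` takes a Zod schema) is:

```ts
import { z } from 'zod';

// Hypothetical schema for the extraction example above.
const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
});
```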
### stream(params)

```ts
for await (const chunk of llm.stream({ prompt: 'Tell a story' })) {
  if (chunk.type === 'text') process.stdout.write(chunk.content);
}
```
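To buffer a whole response instead of printing incrementally, the same loop can accumulate the `text` chunks (other chunk types, if the stream emits any, are skipped here):

```ts
// Collect streamed text into a single string.
let story = '';
for await (const chunk of llm.stream({ prompt: 'Tell a story' })) {
  if (chunk.type === 'text') story += chunk.content;
}
console.log(`Streamed ${story.length} characters`);
```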
## Registered Models

| ID | Provider | Actual Model |
|----|----------|--------------|
| gpt-4o | openai | gpt-4o |
| gpt-4o-mini | openai | gpt-4o-mini |
| gpt-4.1 | openai | gpt-4.1 |
| gpt-4.1-mini | openai | gpt-4.1-mini |
| gpt-4.1-nano | openai | gpt-4.1-nano |
| o3 | openai | o3 |
| o4-mini | openai | o4-mini |
| claude-sonnet | anthropic | claude-sonnet-4-20250514 |
| claude-sonnet-4.5 | anthropic | claude-sonnet-4-5-20250929 |
| claude-opus | anthropic | claude-opus-4-20250514 |
| claude-opus-4.5 | anthropic | claude-opus-4-5-20251124 |
| claude-haiku | anthropic | claude-haiku-4-5-20251215 |
| gemini-flash | google | gemini-2.5-flash |
| gemini-flash-lite | google | gemini-2.5-flash-lite |
| gemini-pro | google | gemini-2.5-pro |
| gemini-2.0-flash | google | gemini-2.0-flash |
## Usage Tracking
```ts
const stats = llm.getUsage();
console.log(stats.tokens);   // { input, output, total }
console.log(stats.cost);     // estimated USD (registered models only)
console.log(stats.requests);
llm.resetUsage();
```
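The counters appear to be cumulative, so per-request usage can be attributed by snapshotting `getUsage()` around a call. A sketch under that assumption:

```ts
// Snapshot cumulative counters before and after one call to get
// the delta for that request (assumes counters only ever increase).
const before = llm.getUsage();
await llm.generate({ model: 'gpt-4o', prompt: 'Summarize DI in one line.' });
const after = llm.getUsage();

console.log('tokens this call:', after.tokens.total - before.tokens.total);
console.log('est. cost this call (USD):', after.cost - before.cost);
```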
## DI Pattern

```ts
import { dep } from 'mesh-ioc';
import { LlmService } from '@ekipnico/ai-mesh';

export class MyAgent {
  @dep() private llm!: LlmService;

  async run() {
    return await this.llm.generate({ prompt: 'Hello' });
  }
}
```
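Wiring the agent follows the same registration pattern as Quick Start: register it on the mesh so its `@dep()` fields get injected, then resolve and run it:

```ts
// Register MyAgent alongside the services from Quick Start.
mesh.service(MyAgent);

const agent = mesh.resolve(MyAgent);
const response = await agent.run();
console.log(response.content);
```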