@llmvin/easyllm v0.1.1
EasyLLM
A unified TypeScript/JavaScript SDK for OpenAI-compatible LLM APIs, including llm.vin and OpenAI.
Features
- 🚀 Universal API: Works with llm.vin, OpenAI, and any OpenAI-compatible API
- 📝 TypeScript Support: Full type safety and IntelliSense
- 🔄 Automatic Retries: Built-in retry logic for robust API calls
- 🎯 OpenAI Compatible: Drop-in replacement for OpenAI SDK syntax
- 🛡️ Error Handling: Comprehensive error handling and validation
- 🔍 Content Moderation: Built-in support for text and image moderation
- 🌐 Web Search: Built-in web search capabilities for real-time information
- ⚡ Streaming Support: Real-time streaming responses with function calling
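The automatic-retry behavior listed above can be sketched conceptually. This is a generic exponential-backoff wrapper for illustration only, not the SDK's actual implementation; the option names `maxRetries` and `baseDelayMs` are assumptions.

```typescript
// Conceptual sketch of retry-with-backoff, NOT the SDK's real internals.
type RetryOptions = { maxRetries?: number; baseDelayMs?: number };

async function withRetry<T>(
  fn: () => Promise<T>,
  { maxRetries = 3, baseDelayMs = 250 }: RetryOptions = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: 250 ms, 500 ms, 1000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A wrapper like this retries transient failures (network errors, 429s) and rethrows once the retry budget is exhausted, which is the general pattern the "Automatic Retries" feature refers to.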
Installation
npm install @llmvin/easyllm
Quick Start
import EasyLLM from '@llmvin/easyllm';
const client = new EasyLLM({
apiKey: 'your-api-key'
});
const response = await client.chat.completions.create({
model: 'llama4-scout',
messages: [
{ role: 'user', content: 'Hello, world!' }
]
});
console.log(response.choices[0].message.content);
Web Search
EasyLLM includes built-in web search capabilities that allow AI models to access real-time information from the web.
Note: Web search is currently supported by the llama4-maverick model on llm.vin.
Basic Web Search
import EasyLLM from '@llmvin/easyllm';
const client = new EasyLLM({
apiKey: 'your-api-key',
webSearch: {
enabled: true,
maxResults: 5,
includeContent: true
}
});
// Automatic web search when enabled globally
const response = await client.chat.completions.createWithWebSearch({
model: 'llama4-maverick',
messages: [
{ role: 'user', content: 'What are the latest developments in AI?' }
]
});
// Or enable web search for specific requests
const response2 = await client.chat.completions.create({
model: 'llama4-maverick',
messages: [
{ role: 'user', content: 'What is the current weather in Tokyo?' }
],
webSearch: true // Enable for this request only
});
Streaming with Web Search
const stream = await client.chat.completions.streamWithWebSearch({
model: 'llama4-maverick',
messages: [
{ role: 'user', content: 'Tell me about recent space missions' }
]
});
for await (const chunk of stream) {
if (chunk.choices[0]?.delta?.content) {
process.stdout.write(chunk.choices[0].delta.content);
}
}
Dynamic Web Search Control
// Enable/disable web search at runtime
client.setWebSearchEnabled(true);
console.log(client.isWebSearchEnabled()); // true
client.setWebSearchEnabled(false);
console.log(client.isWebSearchEnabled()); // false
Supported APIs
- llm.vin - Access to multiple AI models (default)
- OpenAI - Official OpenAI API
- Custom - Any OpenAI-compatible endpoint
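Since the client defaults to llm.vin, targeting OpenAI or a custom endpoint presumably comes down to supplying a different base URL at construction. A minimal sketch of such provider resolution (the llm.vin endpoint URL and the helper itself are illustrative assumptions, not the SDK's documented API; only `https://api.openai.com/v1` is OpenAI's documented base URL):

```typescript
// Hypothetical helper showing how one client class can serve multiple
// OpenAI-compatible APIs by swapping the base URL.
type Provider = 'llmvin' | 'openai' | 'custom';

function resolveBaseURL(provider: Provider, customURL?: string): string {
  switch (provider) {
    case 'openai':
      return 'https://api.openai.com/v1';
    case 'llmvin':
      return 'https://api.llm.vin/v1'; // assumption: default llm.vin endpoint
    case 'custom':
      if (!customURL) throw new Error('custom provider requires a base URL');
      return customURL;
  }
}
```

Because every provider speaks the same OpenAI-compatible wire format, the rest of the client (chat completions, streaming, moderation) can stay identical across endpoints.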
⚠️ AI Development Notice
This project was developed with significant AI assistance:
- 🤖 Code Review: Source code was read, analyzed, and improved by AI
- 📝 Documentation: All documentation (README, examples, API docs) was written by AI
- 💬 Commit Messages: Git commit messages were generated by AI
- 🧪 Tests: Test suites were designed and implemented with AI assistance
- 🏗️ Architecture: Project structure and design patterns suggested by AI
Please be aware:
- While the code has been tested, AI-generated code should be thoroughly reviewed before production use
- Documentation may contain inaccuracies or assumptions that need verification
- Always validate functionality against your specific use cases
- Consider this a starting point that may require human review and refinement
This notice is provided for transparency about the development process.
License
MIT © llm.vin
