# Euri SDK
Node.js SDK for interacting with the Euri Chat Completions API.
## Installation

```bash
npm install euri
```

## Usage
### Basic Usage

```ts
import { EuriClient } from 'euri';

const client = new EuriClient({
  apiKey: 'YOUR_API_TOKEN'
});

async function main() {
  try {
    // Quick completion with a simple prompt
    const response = await client.complete('What is AGI?', 'gpt-4.1-nano');
    console.log('Assistant:', response);
  } catch (error) {
    console.error('Error:', error);
  }
}

main();
```

### Chat Completion
```ts
import { EuriClient, CoreMessage } from 'euri';

const client = new EuriClient({
  apiKey: 'YOUR_API_TOKEN'
});

async function main() {
  try {
    const messages: CoreMessage[] = [
      {
        role: 'user',
        content: 'What is AGI?'
      },
      {
        role: 'assistant',
        content: [
          {
            type: 'text',
            text: 'AGI stands for Artificial General Intelligence. It refers to....'
          }
        ]
      },
      {
        role: 'user',
        content: 'I didn\'t get it.'
      }
    ];

    const response = await client.createChatCompletion({
      model: 'gpt-4.1-nano',
      messages: messages,
      max_tokens: 1000,
      temperature: 0.7
    });

    console.log('Assistant:', response.choices[0].message.content);
  } catch (error) {
    console.error('Error:', error);
  }
}

main();
```

## Available Models
The Euri API supports the following models:
- `llama-3.3-70b-versatile` (default)
- `gemini-2.0-flash-001`
- `mistral-saba-24b`
- `llama-4-scout-17b-16e-instruct`
- `llama-4-maverick-17b-128e-instruct`
- `gemini-2.5-pro-exp`
- `gpt-4.1-nano`
- `gpt-4.1-mini`
- `deepseek-r1-distill-llama-70b` (with reasoning extraction; see the sketch below)
- `qwen-qwq-32b` (with reasoning extraction)
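The two reasoning models emit a chain of thought alongside the final answer, which the SDK extracts. This README does not document where the extracted reasoning lands on the response, so the `reasoning` field in the sketch below is an assumption; verify the actual response shape in your environment:

```ts
import { EuriClient } from 'euri';

const client = new EuriClient({ apiKey: 'YOUR_API_TOKEN' });

async function main() {
  const response = await client.createChatCompletion({
    model: 'deepseek-r1-distill-llama-70b',
    messages: [{ role: 'user', content: 'Why is the sky blue?' }]
  });

  const message = response.choices[0].message;
  // `reasoning` is an assumed name for the extracted chain of thought;
  // inspect the raw response to confirm the field the SDK populates.
  console.log('Reasoning:', (message as any).reasoning);
  console.log('Answer:', message.content);
}

main();
```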
## API Reference
### EuriClient(config)
Creates a new client instance.
- `config.apiKey` (string, required): Your Euri API key.
- `config.baseURL` (string, optional): Override the default API base URL.
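As a minimal sketch, pointing the client at an alternative endpoint via `baseURL` (the URL below is a placeholder for illustration, not a real Euri endpoint):

```ts
import { EuriClient } from 'euri';

// baseURL is optional; omit it to use the default Euri endpoint.
const client = new EuriClient({
  apiKey: 'YOUR_API_TOKEN',
  baseURL: 'https://my-proxy.example.com/v1'
});
```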
### client.createChatCompletion(request)
Creates a chat completion based on the provided messages and model.
- `request.model` (string): The model to use
- `request.messages` (array): Array of message objects
- `request.max_tokens` (number, optional): Maximum tokens to generate (default: 1000)
- `request.temperature` (number, optional): Sampling temperature (default: 0.7)
Returns a promise resolving to the chat completion response.
### client.complete(prompt, model, options)
Convenience method for simple completions with a single prompt.
- `prompt` (string): The user prompt
- `model` (string, optional): The model to use (default: `'llama-3.3-70b-versatile'`)
- `options` (object, optional):
  - `max_tokens` (number): Maximum tokens to generate
  - `temperature` (number): Sampling temperature
Returns a promise resolving to the text response as a string.
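For example, a one-off completion that passes both optional arguments (the prompt and settings below are illustrative):

```ts
import { EuriClient } from 'euri';

const client = new EuriClient({ apiKey: 'YOUR_API_TOKEN' });

async function main() {
  // The third argument tunes generation; both fields are optional.
  const text = await client.complete('Summarize AGI in one sentence.', 'gpt-4.1-mini', {
    max_tokens: 200,
    temperature: 0.2
  });
  console.log(text);
}

main();
```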
## Development

- Clone the repository
- Install dependencies:

  ```bash
  npm install
  ```

- Build the package:

  ```bash
  npm run build
  ```
