@neureus/sdk
Official TypeScript/JavaScript SDK for the Neureus AI Platform
Overview
The Neureus SDK provides a unified interface to interact with the Neureus AI Platform's services:
- AI Gateway: Multi-provider LLM routing with automatic fallback
- Vector Database: Lightning-fast vector search powered by HNSW
- RAG Pipeline: Complete document Q&A with retrieval-augmented generation
Installation
npm install @neureus/sdk
# or
pnpm add @neureus/sdk
# or
yarn add @neureus/sdk
Quick Start
Unified Client
import { NeureusClient } from '@neureus/sdk';
const neureus = new NeureusClient({
apiKey: process.env.NEUREUS_API_KEY
});
// AI Gateway - Chat with any LLM
const response = await neureus.ai.chat.create([
{ role: 'user', content: 'What is Neureus?' }
]);
// Vector Database - Semantic search
const results = await neureus.vector.search({
vector: embedding, // query embedding (number[]) produced by your embedding model
topK: 5,
minSimilarity: 0.7
});
// RAG Pipeline - Document Q&A
const answer = await neureus.rag.query('knowledge-base', {
query: 'How do I deploy my application?'
});
Individual Clients
You can also import and use each client independently:
import { AIClient, VectorClient, RAGClient } from '@neureus/sdk';
const ai = new AIClient({ apiKey: process.env.NEUREUS_API_KEY });
const vector = new VectorClient({ apiKey: process.env.NEUREUS_API_KEY });
const rag = new RAGClient({ apiKey: process.env.NEUREUS_API_KEY });
AI Gateway
Chat Completions
import { AIClient } from '@neureus/sdk/ai';
const ai = new AIClient({
apiKey: process.env.NEUREUS_API_KEY
});
// Non-streaming
const response = await ai.chat.create([
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Explain quantum computing simply.' }
], {
model: 'gpt-4',
temperature: 0.7,
maxTokens: 500
});
console.log(response.choices[0].message.content);
console.log(`Tokens used: ${response.usage.totalTokens}`);
console.log(`Cost: $${response.cost?.total.toFixed(4)}`);
Streaming Completions
// Streaming
const stream = await ai.chat.stream([
{ role: 'user', content: 'Write a short story about AI.' }
], {
model: 'gpt-4'
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
process.stdout.write(content);
}
}
Multi-Provider Support
// Automatic fallback across providers
const response = await ai.chat.create(messages, {
model: 'gpt-4',
fallback: ['claude-3-sonnet', 'gemini-pro']
});
// OpenAI, Anthropic, Google, Cloudflare, and AWS Bedrock supported
Caching & Cost Optimization
// Automatic caching for identical requests
const response = await ai.chat.create(messages, {
cache: true // Default: true
});
if (response.cached) {
console.log(`Cache hit! Saved ${response.cost?.total.toFixed(4)} USD`);
}
Vector Database
Index Management
import { VectorClient } from '@neureus/sdk/vector';
const vectors = new VectorClient({
apiKey: process.env.NEUREUS_API_KEY
});
// Create an index
await vectors.indices.create({
name: 'product-docs',
dimension: 1536, // OpenAI ada-002
metric: 'cosine',
indexType: 'hnsw'
});
// List all indices
const { indices } = await vectors.indices.list();
Vector Operations
// Upsert vectors
await vectors.upsert({
vectors: [
{
id: 'doc-1',
vector: embedding, // [0.1, 0.2, ..., 0.5]
metadata: {
title: 'Getting Started',
section: 'installation',
page: 1
}
}
],
indexName: 'product-docs'
});
// Get a vector by ID
const vector = await vectors.get('doc-1');
Similarity Search
// Vector search
const results = await vectors.search({
vector: queryEmbedding,
topK: 10,
minSimilarity: 0.7,
filter: {
section: 'installation'
},
includeMetadata: true
});
for (const result of results.matches) {
console.log(`${result.id}: ${result.score} - ${result.metadata.title}`);
}
Hybrid Search
// Combine vector and keyword search
const results = await vectors.hybridSearch({
vector: queryEmbedding,
query: 'installation guide',
topK: 10,
alpha: 0.7 // 70% vector, 30% keyword
});
RAG Pipeline
Create a Pipeline
import { RAGClient } from '@neureus/sdk/rag';
const rag = new RAGClient({
apiKey: process.env.NEUREUS_API_KEY
});
// Create a RAG pipeline
await rag.pipelines.create({
name: 'customer-support',
description: 'Customer support knowledge base',
embedding: {
model: 'text-embedding-ada-002',
provider: 'openai',
dimensions: 1536
},
chunking: {
strategy: 'recursive',
size: 512,
overlap: 128
},
generation: {
model: 'gpt-4',
provider: 'openai',
temperature: 0.1,
maxTokens: 1000
}
});
Ingest Documents
// From files
await rag.ingest('customer-support', {
source: './docs',
type: 'file',
format: 'markdown',
recursive: true
});
// From URL
await rag.ingest('customer-support', {
source: 'https://example.com/docs',
type: 'url',
format: 'html'
});
// From text
await rag.ingest('customer-support', {
source: 'This is my document content...',
type: 'text',
metadata: { title: 'Product Guide' }
});
Query with RAG
// Non-streaming query
const response = await rag.query('customer-support', {
query: 'How do I reset my password?',
topK: 5,
minSimilarity: 0.7,
includeSource: true
});
console.log('Answer:', response.answer);
console.log('Sources:', response.sources);
console.log('Performance:', response.performance);
Streaming RAG Responses
// Streaming query
const stream = await rag.queryStream('customer-support', {
query: 'Explain the authentication flow'
});
for await (const chunk of stream) {
if (chunk.type === 'context') {
console.log('Retrieved contexts:', chunk.data);
} else if (chunk.type === 'answer') {
process.stdout.write(chunk.data.content);
} else if (chunk.type === 'complete') {
console.log('\nSources:', chunk.data.sources);
}
}
Configuration
Client Options
const neureus = new NeureusClient({
apiKey: 'nru_...',
baseUrl: 'https://api.neureus.ai', // Optional, default shown
timeout: 60000, // ms, default: 60000
retries: 3, // default: 3
userId: 'user-123', // Optional, for usage tracking
teamId: 'team-456', // Optional, for usage tracking
defaultVectorIndex: 'default', // Optional
defaultVectorNamespace: '' // Optional
});
Error Handling
import {
AIGatewayError,
RateLimitError,
VectorDBError,
RAGError
} from '@neureus/sdk';
try {
const response = await ai.chat.create(messages);
} catch (error) {
if (error instanceof RateLimitError) {
console.error('Rate limited, retry after:', error.metadata.retryAfter);
} else if (error instanceof AIGatewayError) {
console.error('AI Gateway error:', error.code, error.message);
} else {
console.error('Unexpected error:', error);
}
}
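If you want to back off manually on top of the SDK's built-in retries, a minimal sketch (reusing `ai` and `RateLimitError` from above, and assuming `error.metadata.retryAfter` is a delay in seconds) could look like this:
// Illustrative only: wait for the server-suggested delay, then retry once.
async function chatWithBackoff(messages) {
  try {
    return await ai.chat.create(messages);
  } catch (error) {
    if (error instanceof RateLimitError) {
      const waitMs = (error.metadata.retryAfter ?? 1) * 1000; // assumed to be seconds
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      return ai.chat.create(messages); // single retry; add a loop or jitter as needed
    }
    throw error;
  }
}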
TypeScript Support
The SDK is written in TypeScript and includes full type definitions:
import type {
ChatMessage,
ChatCompletionResponse,
VectorEntry,
VectorSearchResponse,
QueryResponse,
RAGConfig
} from '@neureus/sdk';
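For example, the request messages from Quick Start can be annotated with the exported types (a small sketch reusing the unified `neureus` client; the response type is inferred from the client):
import type { ChatMessage } from '@neureus/sdk';
// Typed request messages help catch typos in role/content at compile time.
const messages: ChatMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize our latest release notes.' }
];
const response = await neureus.ai.chat.create(messages);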
Advanced Usage
Custom HTTP Client
The SDK uses `ky` internally for HTTP requests; all clients accept standard `ky` configuration options.
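As an illustration only (the exact option shape may differ from this sketch; see the API reference), forwarding `ky` options such as `headers` and `hooks` alongside the usual client options might look like:
import { NeureusClient } from '@neureus/sdk';
// Sketch: assumes ky options can be passed next to the Neureus options shown above.
const neureus = new NeureusClient({
  apiKey: process.env.NEUREUS_API_KEY,
  headers: { 'x-request-source': 'my-service' }, // extra headers on every request
  hooks: {
    beforeRequest: [
      (request) => console.log('->', request.method, request.url)
    ]
  }
});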
Batch Operations
// Batch vector upserts
await vectors.upsert({
vectors: [...Array(1000)].map((_, i) => ({
id: `doc-${i}`,
vector: generateEmbedding(), // placeholder helper: supply real embeddings of your index's dimension
metadata: { index: i }
}))
});
Concurrent Requests
// Parallel AI requests
const [response1, response2, response3] = await Promise.all([
ai.chat.create(messages1),
ai.chat.create(messages2),
ai.chat.create(messages3)
]);
Examples
Check out the /examples directory for complete working examples:
- Chat application with streaming
- Document Q&A with RAG
- Semantic search
- Multi-provider fallback
- Cost optimization strategies
API Reference
Full API documentation is available at https://docs.neureus.ai/sdk
Support
- Documentation: https://docs.neureus.ai
- GitHub Issues: https://github.com/Neureus/Neureus/issues
- Discord Community: https://discord.gg/neureus
- Email: [email protected]
License
MIT © Neureus
Note: Requires an API key from https://app.neureus.ai
