Smart-Array
A TypeScript/JavaScript data structure that uses LLMs to maintain semantic dynamic ordering of arrays. Unlike traditional .sort(), this library understands sorting rules expressed in natural language.
🚀 Features
- Semantic Sorting: Define sorting rules in plain English
- Multi-Provider Support: Works with OpenAI, Anthropic, Google Gemini, Mistral, Groq, and Ollama
- Optimized for Cost: Semantic Binary Search reduces API calls to O(log n) per insertion
- Local Models: Run 100% offline with Ollama - no API costs
- Type-Safe: Full TypeScript support with generics
- Resilient: Graceful error handling with fallback mechanisms
- Cached: Memoization of comparisons to avoid redundant API calls
📦 Installation
pnpm add smart-array
# Install your preferred AI provider SDK (pick one or more)
pnpm add openai # For OpenAI (GPT-4o, GPT-3.5)
pnpm add @anthropic-ai/sdk # For Anthropic (Claude 3.5)
pnpm add @google/generative-ai # For Google (Gemini 1.5, 2.0)
pnpm add @mistralai/mistralai # For Mistral
pnpm add groq-sdk # For Groq (Llama 3.3, Mixtral)
# Ollama requires no SDK - just install Ollama locally
🎯 Quick Start
import { SmartArray } from 'smart-array';
// Create an array sorted by task urgency
const tasks = new SmartArray({
provider: {
provider: 'openai',
model: 'gpt-4o',
// apiKey is read from OPENAI_API_KEY env var
},
rules: 'Sort by urgency: critical bugs first, then features, then documentation'
});
// Items are automatically sorted on insertion
await tasks.push('Update README with examples');
await tasks.push('Fix authentication bypass vulnerability');
await tasks.push('Add dark mode feature');
console.log(tasks.toArray());
// ['Fix authentication bypass vulnerability', 'Add dark mode feature', 'Update README with examples']
🔌 Supported Providers
| Provider | Models | API Key Env Var | Notes |
|----------|--------|-----------------|-------|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo | OPENAI_API_KEY | Best quality |
| Anthropic | claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus | ANTHROPIC_API_KEY | Great reasoning |
| Google Gemini | gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash | GOOGLE_API_KEY | Good value |
| Mistral | mistral-large, mistral-small, open-mistral-nemo | MISTRAL_API_KEY | European provider |
| Groq | llama-3.3-70b, llama-3.1-8b, mixtral-8x7b | GROQ_API_KEY | Ultra-fast |
| Ollama | llama3, mistral, mixtral, codellama, phi3 | None (local) | 100% free & private |
Provider Examples
// OpenAI
const arr = new SmartArray({
provider: { provider: 'openai', model: 'gpt-4o-mini' },
rules: 'Sort by importance'
});
// Anthropic
const arr = new SmartArray({
provider: { provider: 'anthropic', model: 'claude-3-5-sonnet-latest' },
rules: 'Sort by importance'
});
// Google Gemini
const arr = new SmartArray({
provider: { provider: 'gemini', model: 'gemini-1.5-flash' },
rules: 'Sort by importance'
});
// Mistral
const arr = new SmartArray({
provider: { provider: 'mistral', model: 'mistral-large-latest' },
rules: 'Sort by importance'
});
// Groq (ultra-fast inference)
const arr = new SmartArray({
provider: { provider: 'groq', model: 'llama-3.3-70b-versatile' },
rules: 'Sort by importance'
});
// Ollama (local, free, private)
const arr = new SmartArray({
provider: { provider: 'ollama', model: 'llama3' },
rules: 'Sort by importance'
});
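The provider block also accepts an optional baseUrl for custom endpoints (see the configuration reference below). A minimal sketch, assuming an OpenAI-compatible gateway - the URL is a placeholder, not a real endpoint:
// Custom endpoint via baseUrl (hypothetical gateway URL)
const arr = new SmartArray({
  provider: {
    provider: 'openai',
    model: 'gpt-4o-mini',
    baseUrl: 'https://llm-gateway.example.com/v1'
  },
  rules: 'Sort by importance'
});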
🛠️ Configuration
Basic Configuration
interface SmartArrayConfig<T> {
// AI Provider settings
provider: {
provider: 'openai' | 'anthropic' | 'gemini' | 'mistral' | 'groq' | 'ollama';
model: string;
apiKey?: string; // Optional - uses env vars by default
baseUrl?: string; // Optional - for custom endpoints
};
// Natural language sorting rules
rules: string;
// Insertion strategy (optional)
strategy?: 'AUTO' | 'FULL_REORDER' | 'SEMANTIC_BINARY_SEARCH';
// Cache settings (optional)
cache?: {
enabled: boolean;
maxSize?: number;
ttl?: number; // milliseconds
persistent?: boolean;
};
// Custom serialization for complex objects (optional)
serialize?: (item: T) => string;
// Enable debug logging (optional)
debug?: boolean;
}
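For example, a fully specified configuration (every field below comes from the interface above; the cache values are illustrative):
const arr = new SmartArray<string>({
  provider: { provider: 'openai', model: 'gpt-4o-mini' },
  rules: 'Sort support tickets by severity',
  strategy: 'AUTO',
  cache: { enabled: true, maxSize: 1000, ttl: 3600000 }, // 1 hour
  debug: true
});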
Using with Complex Objects
interface Task {
id: number;
title: string;
priority: 'low' | 'medium' | 'high' | 'critical';
deadline: Date;
}
const taskList = new SmartArray<Task>({
provider: {
provider: 'groq', // Fast and affordable
model: 'llama-3.3-70b-versatile'
},
rules: `
Sort tasks by these criteria in order:
1. Critical priority tasks first
2. Then by deadline (earlier deadlines first)
3. Finally alphabetically by title
`,
serialize: (task) => JSON.stringify({
priority: task.priority,
deadline: task.deadline.toISOString(),
title: task.title
})
});
📊 Insertion Strategies
AUTO (Default)
Automatically selects the best strategy based on array size:
- Uses FULL_REORDER for arrays < 5 items
- Uses SEMANTIC_BINARY_SEARCH for larger arrays
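Conceptually, AUTO just picks between the two explicit strategies using the threshold above - a one-line sketch, not the library's internal code:
// Hypothetical equivalent of AUTO for an array `arr` (threshold from above)
const strategy = arr.length < 5 ? 'FULL_REORDER' : 'SEMANTIC_BINARY_SEARCH';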
FULL_REORDER
Sends the entire array to the LLM for reordering. Best for:
- Small arrays (< 10 items)
- Complex interdependent sorting rules
- When you need the most accurate ordering
SEMANTIC_BINARY_SEARCH
Uses O(log n) LLM comparisons to find the insertion position (see the sketch after this list). Best for:
- Large arrays
- Cost-sensitive applications
- Simpler comparison-based rules
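To see where the O(log n) comes from: inserting into a 1,000-item array takes about 10 comparisons (⌈log₂ 1000⌉) rather than shipping all 1,000 items to the model. Here is a minimal sketch of the idea - an illustration of the technique, not the library's internal code - using a hypothetical async compare(a, b) that asks the LLM whether a should come before b:
// Illustrative only: binary search for an insertion index with an async
// comparator. compare(a, b) stands in for one LLM call that resolves to
// true when `a` should be ordered before `b`.
async function findInsertIndex<T>(
  sorted: T[],
  item: T,
  compare: (a: T, b: T) => Promise<boolean>
): Promise<number> {
  let lo = 0;
  let hi = sorted.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    // One LLM comparison per iteration: O(log n) calls in total
    if (await compare(item, sorted[mid])) {
      hi = mid; // item belongs before sorted[mid]
    } else {
      lo = mid + 1; // item belongs after sorted[mid]
    }
  }
  return lo;
}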
import { SmartArray, InsertionStrategy } from 'smart-array';
// Force specific strategy
const arr = new SmartArray({
provider: { provider: 'ollama', model: 'llama3' },
rules: 'Sort alphabetically',
strategy: InsertionStrategy.SEMANTIC_BINARY_SEARCH
});
🔌 API Reference
Core Methods
// Insert single item
await arr.push(item): Promise<number> // Returns insertion index
// Insert multiple items
await arr.pushAll(items): Promise<number[]>
// Get all items
arr.toArray(): T[]
// Access items
arr.at(index): T | undefined
arr.first: T | undefined
arr.last: T | undefined
arr.length: number
arr.isEmpty: boolean
// Remove items
arr.removeAt(index): T | undefined
arr.remove(item): boolean
arr.clear(): void
// Force reorder
await arr.reorder(): Promise<void>
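Putting a few of these together - a short usage sketch that only uses the methods listed above (provider and rules are placeholders):
const queue = new SmartArray<string>({
  provider: { provider: 'openai', model: 'gpt-4o-mini' },
  rules: 'Sort by urgency'
});
const index = await queue.push('Fix login timeout'); // insertion index
await queue.pushAll(['Write changelog', 'Patch XSS hole']);
console.log(queue.first, queue.length); // most urgent item, 3
queue.remove('Write changelog');
await queue.reorder(); // force a full re-sort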
Array-like Methods
arr.filter(predicate): T[]
arr.find(predicate): T | undefined
arr.findIndex(predicate): number
arr.some(predicate): boolean
arr.every(predicate): boolean
arr.forEach(callback): void
arr.map(callback): U[]
arr.reduce(callback, initialValue): U
arr.includes(item): boolean
arr.slice(start?, end?): T[]
// Iteration
for (const item of arr) { ... }
[...arr]
State & Metadata
// Check array state
arr.state: 'verified' | 'unverified' | 'error'
arr.isVerified: boolean
// Get detailed metadata
arr.getMetadata(): {
state: ArrayState;
lastSortedAt?: Date;
itemCount: number;
totalLlmCalls: number;
cacheHits: number;
lastError?: string;
}
Events
arr.on('item-inserted', (event) => {
console.log(`Inserted at index ${event.index}`);
});
arr.on('sort-error', (event) => {
console.error(`Sort failed: ${event.error.message}`);
});
arr.on('cache-hit', () => console.log('Cache hit!'));
arr.on('state-change', (event) => {
console.log(`State: ${event.previousState} -> ${event.newState}`);
});
🔐 Security Best Practices
Never hardcode API keys! Use environment variables:
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
GEMINI_API_KEY=...
MISTRAL_API_KEY=...
GROQ_API_KEY=gsk_...
OLLAMA_HOST=http://localhost:11434 # Optional, this is the default
// The library automatically reads from env vars
const arr = new SmartArray({
provider: {
provider: 'openai',
model: 'gpt-4o'
// apiKey not needed - reads from OPENAI_API_KEY
},
rules: 'Sort by importance'
});
🏠 Using Ollama (Local Models)
Ollama lets you run models locally with zero API costs:
# 1. Install Ollama (https://ollama.ai)
# 2. Pull a model
ollama pull llama3
# 3. Start Ollama server (usually auto-starts)
ollama serve
// Use in your code
const arr = new SmartArray({
provider: { provider: 'ollama', model: 'llama3' },
rules: 'Sort by relevance'
});
// Check Ollama health (optional)
import { OllamaProvider } from 'smart-array';
const provider = new OllamaProvider({ model: 'llama3' });
const health = await provider.checkHealth();
console.log(health); // { running: true, modelAvailable: true }
🧪 Running Examples
The project includes example scripts to test different strategies:
# Setup
pnpm install
pnpm build
# Test with FULL_REORDER strategy (default for small arrays)
node examples/test-gemini.mjs
# Test with SEMANTIC_BINARY_SEARCH strategy (O(log n) comparisons)
node examples/test-binary-search.mjs
Example: FULL_REORDER (test-gemini.mjs)
import 'dotenv/config';
import { SmartArray } from 'smart-array';
const tasks = new SmartArray({
provider: { provider: 'gemini', model: 'gemini-2.0-flash' },
rules: 'Sort by urgency: critical security issues first, then bugs, then features',
debug: true
});
await tasks.push('Update README documentation');
await tasks.push('Fix SQL injection vulnerability (CRITICAL)');
await tasks.push('Add dark mode feature');
console.log(tasks.toArray());
Example: SEMANTIC_BINARY_SEARCH (test-binary-search.mjs)
import 'dotenv/config';
import { SmartArray, InsertionStrategy } from 'smart-array';
const tasks = new SmartArray({
provider: { provider: 'gemini', model: 'gemini-2.0-flash' },
rules: 'Sort tasks by priority: CRITICAL > HIGH > MEDIUM > LOW',
strategy: InsertionStrategy.SEMANTIC_BINARY_SEARCH,
debug: true
});
// Binary search uses O(log n) LLM calls per insertion
await tasks.push('Add unit tests (MEDIUM)');
await tasks.push('Fix SQL injection (CRITICAL)');
await tasks.push('Memory leak fix (HIGH)');
// ... more items
const meta = tasks.getMetadata();
console.log(`Total LLM calls: ${meta.totalLlmCalls}`);
console.log(`Cache hits: ${meta.cacheHits}`);
🧪 Testing
# Run all unit tests
pnpm test
# Watch mode
pnpm test:watch
# Coverage report
pnpm test:coverage
📁 Project Structure
smart-array/
├── src/
│ ├── core/ # Main classes (SmartArray, Engine, Cache)
│ ├── providers/ # AI provider implementations
│ │ ├── OpenAIProvider.ts
│ │ ├── AnthropicProvider.ts
│ │ ├── GeminiProvider.ts
│ │ ├── MistralProvider.ts
│ │ ├── GroqProvider.ts
│ │ └── OllamaProvider.ts
│ ├── prompts/ # LLM prompt templates
│ ├── types/ # TypeScript type definitions
│ └── index.ts # Public API exports
├── examples/ # Example scripts
│ ├── test-gemini.mjs
│ └── test-binary-search.mjs
├── tests/ # Jest test suites
└── package.json
🤝 Contributing
Contributions are welcome! Please read our contributing guidelines and submit pull requests.
📄 License
MIT © 2024
Built with ❤️ using TypeScript and LLMs
