# Future Explorer Lib

Shared utilities and clients for Future Explorer projects. Current version: `@future-explorer/lib` v2.0.2.
## Installation

```bash
npm install @future-explorer/lib
```

## GrokAiClient
AI client for interacting with Grok (xAI) models with structured output support.
### Basic Usage

```typescript
import { GrokAiClient } from '@future-explorer/lib';

const client = new GrokAiClient({
  apiKey: 'your-api-key', // or set the XAI_API_KEY env var
  temperature: 0.1,
  maxTokens: 4096,
  logger: console, // optional
});

interface PersonInfo {
  name: string;
  age: number;
}

const response = await client.getGenericStructuredResponse<PersonInfo>({
  model: 'grok-2-latest',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Extract data from this text...' },
  ],
  tools: [
    {
      type: 'function',
      name: 'extract_info',
      description: 'Extract structured information',
      parameters: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          age: { type: 'number' },
        },
        required: ['name', 'age'],
      },
    },
  ],
});

if (response) {
  console.log(response.args); // { name: '...', age: ... }
  console.log(response.functionName); // 'extract_info'
}
```

### Constructor Options

- `apiKey` (optional): xAI API key. Falls back to the `XAI_API_KEY` env var.
- `temperature` (optional): Default temperature for requests (default: `0.1`).
- `maxTokens` (optional): Default max tokens (default: `4096`).
- `logger` (optional): Logger instance with `warn` and `error` methods.
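Since the logger option only requires `warn` and `error` methods, any plain object with that shape works. A minimal sketch (the `makePrefixLogger` helper and its prefixing behavior are illustrative, not part of the library):

```typescript
// Minimal logger satisfying the { warn, error } shape described above.
// The prefixing is illustrative; any object with these two methods works.
interface MinimalLogger {
  warn: (...args: unknown[]) => void;
  error: (...args: unknown[]) => void;
}

function makePrefixLogger(prefix: string): MinimalLogger {
  return {
    warn: (...args) => console.warn(`[${prefix}]`, ...args),
    error: (...args) => console.error(`[${prefix}]`, ...args),
  };
}

const logger = makePrefixLogger('grok');
logger.warn('rate limit approaching');
```

This also makes it easy to route client warnings into a structured logging framework by swapping the `console.*` calls for your logger of choice.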
## UnifiedAiClient
Multi-provider AI client supporting OpenAI, XAI (Grok), and Google Gemini with Zod schema-based structured outputs.
### Basic Usage

```typescript
import { UnifiedAiClient, Provider } from '@future-explorer/lib';
import { z } from 'zod';

// Create a client with the desired provider
const client = new UnifiedAiClient(Provider.OpenAI);
// or Provider.XAI, Provider.Gemini

// Define a Zod schema for the response
const SentimentSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  confidence: z.number().min(0).max(1),
  summary: z.string(),
});

// Generate a structured response
const result = await client.generateStructuredResponse(SentimentSchema, {
  prompt: 'I absolutely love this product!',
  system: 'You are a sentiment analysis expert.',
});

console.log(result.sentiment); // 'positive'
console.log(result.confidence); // 0.95
console.log(result.summary); // '...'
```

### Providers
| Provider | Enum Value | Required Environment Variables |
| ------------- | ----------------- | ---------------------------------------------- |
| OpenAI | `Provider.OpenAI` | `OPENAI_API_KEY`, `MODEL_OPEN_AI` |
| XAI (Grok) | `Provider.XAI` | `XAI_API_KEY`, `MODEL_XAI` |
| Google Gemini | `Provider.Gemini` | `GOOGLE_GENERATIVE_AI_API_KEY`, `MODEL_GEMINI` |
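The provider-to-environment-variable mapping in the table can be expressed as plain data. The sketch below is illustrative only: the string enum values and the `resolveConfig` helper are assumptions for the example, not the library's internals; the variable names come from the table above.

```typescript
// Illustrative mapping of each provider to its required env vars (per the
// table above). Enum values and resolveConfig are hypothetical, not the
// library's actual implementation.
enum Provider {
  OpenAI = 'openai',
  XAI = 'xai',
  Gemini = 'gemini',
}

const ENV_VARS: Record<Provider, { apiKey: string; model: string }> = {
  [Provider.OpenAI]: { apiKey: 'OPENAI_API_KEY', model: 'MODEL_OPEN_AI' },
  [Provider.XAI]: { apiKey: 'XAI_API_KEY', model: 'MODEL_XAI' },
  [Provider.Gemini]: { apiKey: 'GOOGLE_GENERATIVE_AI_API_KEY', model: 'MODEL_GEMINI' },
};

// Fail fast with a clear message when a required variable is missing.
function resolveConfig(provider: Provider): { apiKey: string; model: string } {
  const vars = ENV_VARS[provider];
  const apiKey = process.env[vars.apiKey];
  const model = process.env[vars.model];
  if (!apiKey) throw new Error(`Missing required env var: ${vars.apiKey}`);
  if (!model) throw new Error(`Missing required env var: ${vars.model}`);
  return { apiKey, model };
}
```

Checking both variables up front, before any request is made, gives a clearer failure mode than a provider SDK error deep inside a call.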
### Methods

- `generateStructuredResponse<T>(schema, params, usageTracker?)`: Generates a structured response matching the Zod schema. `params` accepts `prompt`, `system`, plus any additional options. Optionally pass an `AiUsageTracker` to record usage.
- `getModel()`: Returns the underlying `LanguageModel` instance.
## AiUsageTracker

Tracks and accumulates AI usage costs across multiple `generateStructuredResponse()` calls. Useful for calculating the total cost of processing a single input that requires multiple AI calls.
### Basic Usage

```typescript
import { UnifiedAiClient, AiUsageTracker, Provider } from '@future-explorer/lib';
import { z } from 'zod';

const client = new UnifiedAiClient(Provider.XAI);
const tracker = new AiUsageTracker();
const Schema = z.object({ summary: z.string() });

// Each call automatically records usage in the tracker
await client.generateStructuredResponse(Schema, { prompt: 'First call...' }, tracker);
await client.generateStructuredResponse(Schema, { prompt: 'Second call...' }, tracker);

// Get accumulated results
console.log(tracker.getSummary());
// AI Usage: 2 call(s), $0.001234 estimated
// Tokens - input: 500, output: 200, reasoning: 0, total: 700
//   [xai/grok-4-1-fast-reasoning] in=250 out=100 $0.000617 3200ms
//   [xai/grok-4-1-fast-reasoning] in=250 out=100 $0.000617 2800ms

console.log(tracker.getTotalCost()); // 0.001234
console.log(tracker.getTotalTokens()); // { input: 500, output: 200, reasoning: 0, total: 700 }
console.log(tracker.getCalls()); // AiCallUsage[]
```

## Supported Models
Built-in pricing for cost estimation (easily extensible):
| Provider | Models |
| -------- | ------------------------------------------------------------------------------ |
| xAI | grok-4-1-fast, grok-4-fast, grok-4, grok-3, grok-3-mini, grok-2 |
| OpenAI | gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o3-mini, o4-mini |
| Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.0-flash |
Model IDs are matched by substring, so `grok-4-1-fast-reasoning` matches the `grok-4-1-fast` pricing entry.
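The substring matching described above can be sketched as follows. The per-token rates are placeholders and the `findPricing` helper is hypothetical, not the library's code; checking longer keys first is one plausible way to keep a shorter entry like `grok-4` from shadowing `grok-4-1-fast`.

```typescript
// Illustrative substring-based pricing lookup. Rates are placeholder values,
// not the library's real pricing data.
interface ModelPricing {
  inputPerMTok: number;  // USD per million input tokens (placeholder)
  outputPerMTok: number; // USD per million output tokens (placeholder)
}

const PRICING: Record<string, ModelPricing> = {
  'grok-4-1-fast': { inputPerMTok: 0.2, outputPerMTok: 0.5 },
  'grok-4': { inputPerMTok: 3.0, outputPerMTok: 15.0 },
};

// Try longer keys first so 'grok-4-1-fast-reasoning' matches 'grok-4-1-fast'
// rather than the shorter 'grok-4' entry.
function findPricing(modelId: string): ModelPricing | undefined {
  const keys = Object.keys(PRICING).sort((a, b) => b.length - a.length);
  const key = keys.find((k) => modelId.includes(k));
  return key ? PRICING[key] : undefined;
}
```

Under this scheme, adding a new model is just a new entry in the table, which is what "easily extensible" suggests.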
## Development

### Build

```bash
npm run build
```

### Watch Mode

```bash
npm run watch
```

### Lint

```bash
npm run lint
npm run lint:fix
```

### Local Development

Link the package locally for testing in other projects:

```bash
./scripts/link-local.sh
```

Then in your project:

```bash
npm link @future-explorer/lib
```

## Publishing

### Manual Publish

```bash
npm run build
npm publish
```

### Using Script

```bash
./scripts/publish.sh [patch|minor|major]
```

## Changelog
### 1.0.13

- `generateStructuredResponse()` now logs per-call usage to console

### 1.0.12

- Added `AiUsageTracker` for tracking AI usage costs across multiple calls
- `generateStructuredResponse()` now accepts an optional `usageTracker` parameter

### 1.0.11

- Moved `schema` out of `GenerateObjectParams` into a separate parameter
- Renamed `userPrompt`/`systemMessage` to `prompt`/`system`

### 1.0.10

- Refactored `generateStructuredResponse()` to accept a single `GenerateObjectParams<T>` object

### 1.0.7

- Updated peer dependency to zod 4.2.x

### 1.0.6

- Added `UnifiedAiClient` with multi-provider support (OpenAI, XAI, Gemini)
## License

ISC
