OpenModels v0.4.0
Unified SDK for 20+ AI tasks using HuggingFace Inference Providers
Call open-source AI models with a consistent API. Supports text generation, embeddings, image generation, audio transcription, object detection, and 15 more tasks through HuggingFace's Inference Providers API with automatic provider routing.
✨ What's New in v0.4.0
- 20+ AI tasks supported (up from 6)
- Direct HTTP calls to router.huggingface.co/v1 (removed SDK dependency)
- Provider selection with `model:provider` syntax
- Full parameter support for all tasks
- OpenAI-compatible API structure
- Better reliability and error handling
See the Migration Guide for upgrading from v0.3.x.
Installation
npm install openmodels

Quick Start
import { client } from 'openmodels';
const openmodels = client({
apiKey: 'om_your_api_key_here', // Get from tryscout.dev
hfToken: 'hf_...' // Your HuggingFace token
});
// Chat completion
const response = await openmodels.chat({
model: 'openai/gpt-oss-120b',
messages: [
{ role: 'user', content: 'Explain quantum computing' }
],
});
console.log(response.choices[0].message.content);

Supported Tasks
Text Tasks
Chat Completion
Conversational AI with LLMs:
const response = await openmodels.chat({
model: 'openai/gpt-oss-120b:cerebras', // Provider selection
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'What is machine learning?' }
],
temperature: 0.7,
max_tokens: 500,
frequency_penalty: 0.5,
presence_penalty: 0.3,
response_format: { type: 'json_object' }, // Force JSON output
seed: 42, // Reproducible outputs
});

Text Generation
Raw text completion:
const response = await openmodels.textGeneration({
model: 'meta-llama/Llama-3.1-8B-Instruct',
inputs: 'Once upon a time',
parameters: {
max_new_tokens: 200,
temperature: 0.8,
top_p: 0.95,
repetition_penalty: 1.2,
},
stream: true, // Enable streaming
});

Feature Extraction (Embeddings)
Generate embeddings for semantic search:
const response = await openmodels.featureExtraction({
model: 'intfloat/multilingual-e5-large',
inputs: ['Hello world', 'Another text'],
parameters: {
normalize: true,
truncate: true,
},
});
console.log('Embeddings:', response.embeddings);

Question Answering
Answer questions from context:
const response = await openmodels.questionAnswering({
model: 'deepset/roberta-base-squad2',
inputs: {
question: 'What is the capital of France?',
context: 'Paris is the capital and largest city of France.',
},
parameters: {
top_k: 3,
max_answer_len: 50,
},
});

Summarization
Summarize long text:
const response = await openmodels.summarization({
model: 'facebook/bart-large-cnn',
inputs: 'Long article text here...',
parameters: {
generate_parameters: {
max_length: 150,
min_length: 30,
},
},
});
console.log('Summary:', response.summary_text);

Translation
Translate between languages:
const response = await openmodels.translation({
model: 'Helsinki-NLP/opus-mt-en-de',
inputs: 'Hello, how are you?',
parameters: {
src_lang: 'en',
tgt_lang: 'de',
},
});
console.log('Translation:', response.translation_text);

Text Classification
Classify text into categories:
const response = await openmodels.textClassification({
model: 'ProsusAI/finbert',
inputs: 'The stock market is performing well today.',
parameters: {
top_k: 3,
},
});
console.log('Classifications:', response.classifications);

Token Classification (NER)
Extract named entities:
const response = await openmodels.tokenClassification({
model: 'dslim/bert-base-NER',
inputs: 'Apple Inc. was founded by Steve Jobs in Cupertino.',
parameters: {
aggregation_strategy: 'simple',
},
});
console.log('Entities:', response.entities);

Zero-Shot Classification
Classify with custom labels:
const response = await openmodels.zeroShotClassification({
model: 'facebook/bart-large-mnli',
inputs: 'This is a great product!',
parameters: {
candidate_labels: ['positive', 'negative', 'neutral'],
multi_label: false,
},
});
console.log('Labels:', response.labels);
console.log('Scores:', response.scores);

Fill Mask
Predict masked tokens:
const response = await openmodels.fillMask({
model: 'google-bert/bert-base-uncased',
inputs: 'Paris is the [MASK] of France.',
parameters: {
top_k: 5,
},
});
console.log('Predictions:', response.predictions);

Table Question Answering
Answer questions about tables:
const response = await openmodels.tableQuestionAnswering({
model: 'google/tapas-base-finetuned-wtq',
inputs: {
query: 'What is the population of France?',
table: {
'Country': ['France', 'Germany', 'Spain'],
'Population': ['67M', '83M', '47M'],
},
},
});
console.log('Answer:', response.answer);

Image Tasks
Text to Image
Generate images from text:
const response = await openmodels.textToImage({
model: 'black-forest-labs/FLUX.1-schnell',
inputs: 'A beautiful sunset over mountains',
parameters: {
negative_prompt: 'blurry, low quality',
width: 1024,
height: 1024,
num_inference_steps: 50,
guidance_scale: 7.5,
seed: 42,
},
});
// Save image (assumes: import fs from 'node:fs')
const buffer = await response.image.arrayBuffer();
fs.writeFileSync('image.png', Buffer.from(buffer));

Image Classification
Classify image content:
const response = await openmodels.imageClassification({
model: 'google/vit-base-patch16-224',
inputs: imageBase64,
parameters: {
top_k: 5,
},
});
console.log('Classifications:', response.classifications);

Object Detection
Detect objects in images:
const response = await openmodels.objectDetection({
model: 'facebook/detr-resnet-50',
inputs: imageBase64,
parameters: {
threshold: 0.5,
},
});
console.log('Detections:', response.detections);
// Each detection has: label, score, box (xmin, ymin, xmax, ymax)

Image Segmentation
Segment regions in images:
const response = await openmodels.imageSegmentation({
model: 'mattmdjaga/segformer_b2_clothes',
inputs: imageBase64,
parameters: {
threshold: 0.5,
},
});
console.log('Segments:', response.segments);

Image Text to Text (VLM)
Ask questions about images:
const response = await openmodels.imageTextToText({
model: 'zai-org/GLM-4.5V',
inputs: {
image: imageBase64,
text: 'What is in this image?',
},
parameters: {
max_new_tokens: 512,
temperature: 0.7,
},
});
console.log('Answer:', response.generated_text);

Image to Image
Transform images:
const response = await openmodels.imageToImage({
model: 'timbrooks/instruct-pix2pix',
inputs: {
image: imageBase64,
prompt: 'Make it look like a watercolor painting',
},
parameters: {
strength: 0.8,
guidance_scale: 7.5,
},
});

Video Tasks
Text to Video
Generate videos from text:
const response = await openmodels.textToVideo({
model: 'ali-vilab/text-to-video-ms-1.7b',
inputs: 'A cat playing with a ball',
parameters: {
num_frames: 16,
fps: 8,
guidance_scale: 7.5,
},
});
// Save video (assumes: import fs from 'node:fs')
const buffer = await response.video.arrayBuffer();
fs.writeFileSync('video.mp4', Buffer.from(buffer));

Audio Tasks
Automatic Speech Recognition
Transcribe speech to text:
const response = await openmodels.automaticSpeechRecognition({
model: 'openai/whisper-large-v3',
inputs: audioBase64,
parameters: {
return_timestamps: true,
},
});
console.log('Transcription:', response.text);
if (response.chunks) {
console.log('Timestamps:', response.chunks);
}

Audio Classification
Classify audio content:
const response = await openmodels.audioClassification({
model: 'MIT/ast-finetuned-audioset-10-10-0.4593',
inputs: audioBase64,
parameters: {
top_k: 5,
},
});
console.log('Classifications:', response.classifications);

Provider Selection
Choose specific providers for better performance or availability:
// Automatic selection (default)
model: 'openai/gpt-oss-120b'
// Manual provider selection
model: 'openai/gpt-oss-120b:cerebras'
model: 'openai/gpt-oss-120b:fireworks'
model: 'openai/gpt-oss-120b:together'
model: 'Qwen/Qwen2.5-7B-Instruct:replicate'

Available providers: cerebras, cohere, fal-ai, featherless, fireworks, groq, hyperbolic, hf-inference, nebius, novita, nscale, public-ai, replicate, sambanova, scaleway, together, zai
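The `model:provider` suffix is plain string syntax, so it can be composed or parsed without the SDK; a minimal sketch (this helper is illustrative and not part of openmodels):

```typescript
// Split an OpenModels model string into its repo id and optional provider.
// Only the last colon is treated as the separator, so repo ids containing
// "/" (e.g. 'openai/gpt-oss-120b') pass through untouched.
function parseModel(spec: string): { model: string; provider?: string } {
  const idx = spec.lastIndexOf(':');
  if (idx === -1) return { model: spec }; // automatic provider routing
  return { model: spec.slice(0, idx), provider: spec.slice(idx + 1) };
}

console.log(parseModel('openai/gpt-oss-120b:cerebras'));
// { model: 'openai/gpt-oss-120b', provider: 'cerebras' }
console.log(parseModel('openai/gpt-oss-120b'));
// { model: 'openai/gpt-oss-120b' }
```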
Streaming
Real-time token generation:
const stream = await openmodels.chat({
model: 'openai/gpt-oss-120b',
messages: [{ role: 'user', content: 'Write a poem' }],
stream: true,
}) as AsyncGenerator<string, void, unknown>;
for await (const token of stream) {
process.stdout.write(token);
}

Model Registry
Get default models for tasks:
import { getDefaultModel, getModelsForTask, getAllTasks } from 'openmodels';
// Get default model for a task
const model = getDefaultModel('text-to-image');
// Returns: 'black-forest-labs/FLUX.1-schnell'
// Get all models for a task
const models = getModelsForTask('chat-completion');
// Returns: ['openai/gpt-oss-120b', 'Qwen/Qwen2.5-7B-Instruct', ...]
// Get all available tasks
const tasks = getAllTasks();
// Returns array of all 20 task names

Error Handling
import { OpenModelsError } from 'openmodels';
try {
const response = await openmodels.chat({...});
} catch (error) {
if (error instanceof OpenModelsError) {
console.error('OpenModels error:', error.message);
}
}

TypeScript Support
Full TypeScript definitions for all 20 tasks:
import {
// Chat
ChatCompletionRequest,
ChatCompletionResponse,
// Text
TextGenerationRequest,
FeatureExtractionRequest,
QuestionAnsweringRequest,
SummarizationRequest,
TranslationRequest,
TextClassificationRequest,
TokenClassificationRequest,
ZeroShotClassificationRequest,
FillMaskRequest,
TableQuestionAnsweringRequest,
// Image
TextToImageRequest,
ImageClassificationRequest,
ObjectDetectionRequest,
ImageSegmentationRequest,
ImageTextToTextRequest,
ImageToImageRequest,
// Video
TextToVideoRequest,
// Audio
AutomaticSpeechRecognitionRequest,
AudioClassificationRequest,
} from 'openmodels';

CLI Tool
# Install globally
npm install -g openmodels
# Chat with models
openmodels chat "Explain quantum computing"
# Generate images
openmodels image "A beautiful sunset" --model black-forest-labs/FLUX.1-schnell
# Generate embeddings
openmodels embed "Hello world" --model intfloat/multilingual-e5-large
# List models
openmodels models

Configuration
const openmodels = client({
apiKey: 'om_...', // Your OpenModels API key
hfToken: 'hf_...', // Your HuggingFace token (optional)
});

Examples
See the /examples directory for complete working examples of all tasks.
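As a self-contained sketch of the semantic-search pattern: vectors returned by featureExtraction can be ranked with plain cosine similarity. The helpers below are illustrative, not part of the SDK:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return candidate indices ordered by similarity to the query (highest first).
function rank(query: number[], candidates: number[][]): number[] {
  return candidates
    .map((vec, i) => ({ i, score: cosineSimilarity(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .map(({ i }) => i);
}

console.log(rank([1, 0], [[0, 1], [1, 0], [0.7, 0.7]]));
// [ 1, 2, 0 ] — identical vector first, orthogonal vector last
```

In practice the query and candidate vectors would come from two featureExtraction calls with the same model, since embeddings from different models are not comparable.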
Framework Integrations
LangChain
import { OpenModelsLLM, OpenModelsEmbeddings } from 'openmodels/integrations/langchain';
const llm = new OpenModelsLLM(
{ apiKey: 'om_...' },
{ model: 'openai/gpt-oss-120b' }
);

LlamaIndex
from openmodels_llamaindex import OpenModelsLLM
llm = OpenModelsLLM(
api_key="om_...",
model="openai/gpt-oss-120b"
)

License
MIT
