
@layer-ai/sdk · v2.5.11 · 2,342 downloads

Configure multiple AI models at runtime without code changes or deployments.

@layer-ai/sdk

TypeScript/JavaScript SDK for Layer AI - Intelligent LLM inference with smart routing and fallbacks.

As of v1.0.0, this package is inference-only. For admin operations (managing gates, keys, and logs), use @layer-ai/admin.

Installation

npm install @layer-ai/sdk
# or
pnpm add @layer-ai/sdk
# or
yarn add @layer-ai/sdk

Quick Start

import { Layer } from '@layer-ai/sdk';

const layer = new Layer({
  apiKey: process.env.LAYER_API_KEY
});

// Make an inference request through a gate
const response = await layer.complete({
  gate: '435282da-4548-4e08-8f9e-a6104803fb8a',  // Gate ID (UUID)
  data: {
    messages: [
      { role: 'user', content: 'Explain quantum computing in simple terms' }
    ]
  }
});

console.log(response.content);

Migrating from v0.x?

See the Migration Guide for detailed upgrade instructions.

Key Changes:

  • SDK is now inference-only - use @layer-ai/admin for management operations
  • Gate IDs (UUIDs) required instead of gate names
  • Request format changed to include data wrapper
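The shape change can be sketched with plain request objects. The v0.x form below is illustrative, reconstructed from the key changes above; 'my-gate' is a placeholder name:

```typescript
// v0.x style (no longer accepted): gate referenced by name,
// inference fields at the top level.
const oldRequest = {
  gate: 'my-gate',
  messages: [{ role: 'user', content: 'Hello' }],
};

// Current style: gate referenced by its ID (a UUID),
// inference fields wrapped in `data`.
const newRequest = {
  gate: '435282da-4548-4e08-8f9e-a6104803fb8a', // Gate ID, not a name
  data: {
    messages: [{ role: 'user', content: 'Hello' }],
  },
};
```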

Configuration

Constructor Options

{
  apiKey: string;        // Required: Your Layer API key
  baseUrl?: string;      // Optional: API base URL (default: https://api.uselayer.ai)
}

API Reference

Type-Safe Methods (v2.5.0+)

Layer SDK now provides dedicated type-safe methods for each modality with full TypeScript support and IDE autocomplete.

layer.chat(request)

Type-safe chat completions with message-based interface.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    messages: Message[]; // Required: Conversation messages
    temperature?: number;  // Optional: Override gate temperature
    maxTokens?: number;    // Optional: Override max tokens
    topP?: number;         // Optional: Override top-p sampling
  };
  model?: string;        // Optional: Override gate model
  metadata?: Record<string, unknown>; // Optional: Custom metadata
}

Example:

const response = await layer.chat({
  gateId: 'my-chat-gate-id',
  data: {
    messages: [
      { role: 'system', content: 'You are a helpful assistant' },
      { role: 'user', content: 'Explain quantum computing' }
    ],
    temperature: 0.7
  }
});

layer.image(request)

Type-safe image generation.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    prompt: string;      // Required: Image generation prompt
    size?: string;       // Optional: Image size (e.g., '1024x1024')
    quality?: string;    // Optional: Image quality
    style?: string;      // Optional: Image style
  };
  model?: string;
  metadata?: Record<string, unknown>;
}

Example:

const response = await layer.image({
  gateId: 'my-image-gate-id',
  data: {
    prompt: 'A serene landscape with mountains and a lake',
    size: '1024x1024',
    quality: 'hd'
  }
});

console.log(response.imageUrl); // Generated image URL

layer.video(request)

Type-safe video generation.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    prompt: string;      // Required: Video generation prompt
  };
  model?: string;
  metadata?: Record<string, unknown>;
}
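No example is shown for video, so here is a minimal sketch of a request payload mirroring the parameter shape above. 'my-video-gate-id' and the prompt are placeholders, and the call itself is left commented since it needs a live API key:

```typescript
// Request payload for layer.video(); the gate ID is a placeholder for a
// gate configured with a video model.
const videoRequest = {
  gateId: 'my-video-gate-id',
  data: {
    prompt: 'A slow aerial shot over a coastline at sunset',
  },
};

// const response = await layer.video(videoRequest);
```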

layer.embeddings(request)

Type-safe text embeddings.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    input: string | string[]; // Required: Text(s) to embed
  };
  model?: string;
  metadata?: Record<string, unknown>;
}

Example:

const response = await layer.embeddings({
  gateId: 'my-embeddings-gate-id',
  data: {
    input: 'Machine learning is fascinating'
  }
});

console.log(response.embeddings[0].length); // Vector dimensions (e.g., 1536)

layer.tts(request)

Type-safe text-to-speech.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    input: string;       // Required: Text to synthesize
    voice?: string;      // Optional: Voice selection
  };
  model?: string;
  metadata?: Record<string, unknown>;
}

Example:

const response = await layer.tts({
  gateId: 'my-tts-gate-id',
  data: {
    input: 'Hello, this is a test of text to speech',
    voice: 'alloy'
  }
});

console.log(response.audio.base64); // Base64 encoded audio
console.log(response.audio.format); // Audio format (e.g., 'mp3')

layer.ocr(request)

Type-safe optical character recognition and document processing.

Parameters:

{
  gateId: string;        // Required: Gate ID (UUID)
  data: {
    documentUrl?: string;  // Document URL
    imageUrl?: string;     // Image URL
    base64?: string;       // Base64 encoded document/image
    // Note: Provide one of the above
  };
  model?: string;
  metadata?: Record<string, unknown>;
}
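Since exactly one source field should be provided, a small validation helper can make that explicit before calling layer.ocr(). This is a sketch of our own, not an SDK export:

```typescript
// Not part of the SDK: checks that exactly one document source is set
// before the payload is sent to layer.ocr().
interface OcrData {
  documentUrl?: string;
  imageUrl?: string;
  base64?: string;
}

function validateOcrData(data: OcrData): OcrData {
  const sources = [data.documentUrl, data.imageUrl, data.base64]
    .filter((s) => s !== undefined);
  if (sources.length !== 1) {
    throw new Error('Provide exactly one of documentUrl, imageUrl, or base64');
  }
  return data;
}

// const response = await layer.ocr({
//   gateId: 'my-ocr-gate-id',
//   data: validateOcrData({ documentUrl: 'https://example.com/invoice.pdf' }),
// });
```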

layer.complete(request) (v2 Legacy)

Send a generic completion request through a gate. This method remains available for backwards compatibility.

Parameters:

{
  gate: string;          // Required: Gate ID (UUID)
  data: {
    messages: Message[]; // Required: Conversation messages
    temperature?: number;  // Optional: Override gate temperature
    maxTokens?: number;    // Optional: Override max tokens
    topP?: number;         // Optional: Override top-p sampling
  };
  model?: string;        // Optional: Override gate model
  type?: 'chat';        // Optional: Request type (default: 'chat')
}

Response:

{
  content: string;       // Generated text
  model: string;         // Model used (may differ from requested if fallback occurred)
  finishReason: string;  // Why generation stopped
  usage: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
  cost: number;          // Cost in USD
  latencyMs: number;     // Request latency
}

Example:

const response = await layer.complete({
  gate: '435282da-4548-4e08-8f9e-a6104803fb8a',
  data: {
    messages: [
      { role: 'system', content: 'You are a helpful coding assistant' },
      { role: 'user', content: 'Write a hello world function in Python' }
    ],
    temperature: 0.7,
    maxTokens: 500
  }
});

console.log(response.content);
console.log(`Cost: $${response.cost.toFixed(6)}`);
console.log(`Tokens: ${response.usage.totalTokens}`);

layer.models

Access to the model registry utilities.

// Get all available models
const models = layer.models.getAll();

// Get models by provider
const openaiModels = layer.models.getByProvider('openai');

// Get model metadata
const model = layer.models.get('gpt-4o');

Smart Routing & Fallbacks

Layer AI automatically handles model fallbacks when configured:

// If your gate has fallback models configured:
// Primary: gpt-4o
// Fallbacks: [claude-sonnet-4, gemini-2.0-flash-exp]

const response = await layer.complete({
  gate: 'my-gate-id',
  data: { messages: [...] }
});

// If gpt-4o fails, automatically tries claude-sonnet-4
// If that fails, tries gemini-2.0-flash-exp
// Returns the first successful response

Parameter Overrides

Gates can allow or restrict parameter overrides:

// If gate allows temperature overrides
const response = await layer.complete({
  gate: 'my-gate-id',
  data: {
    messages: [...],
    temperature: 0.9  // Override gate's default
  }
});

// If override not allowed, gate's default is used

TypeScript Support

Full TypeScript support with exported types:

import type {
  Gate,
  GateConfig,
  Log,
  ApiKey,
  SupportedModel,
  LayerRequest,
  LayerResponse
} from '@layer-ai/sdk';

Error Handling

try {
  const response = await layer.complete({
    gate: 'my-gate-id',
    data: { messages: [...] }
  });
} catch (error) {
  if (error instanceof Error) {
    console.error('Layer error:', error.message);
    // Handle: authentication, rate limits, model failures, etc.
  }
}
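For transient failures such as rate limits, the catch-all above can be extended with retries. Here is a minimal exponential-backoff sketch; withRetry is our own helper, not an SDK export:

```typescript
// Not part of the SDK: retries an async call a few times, doubling the
// delay between attempts, and rethrows the last error if all attempts fail.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 250ms, 500ms, 1000ms, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage:
// const response = await withRetry(() => layer.complete({ gate, data }));
```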

Examples

Basic Chatbot

import { Layer } from '@layer-ai/sdk';

const layer = new Layer({ apiKey: process.env.LAYER_API_KEY });

async function chat(userMessage: string) {
  const response = await layer.complete({
    gate: process.env.CHATBOT_GATE_ID!,
    data: {
      messages: [
        { role: 'user', content: userMessage }
      ]
    }
  });

  return response.content;
}

const answer = await chat('What is the capital of France?');
console.log(answer);

Multi-turn Conversation

const messages = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi! How can I help you today?' },
  { role: 'user', content: 'Tell me about quantum computing' }
];

const response = await layer.complete({
  gate: 'chat-gate-id',
  data: { messages }
});

messages.push({
  role: 'assistant',
  content: response.content
});

With Model Override

const response = await layer.complete({
  gate: 'my-gate-id',
  model: 'claude-sonnet-4',  // Override gate's default model
  data: {
    messages: [
      { role: 'user', content: 'Explain relativity' }
    ]
  }
});

Admin Operations

For managing gates, API keys, and logs, use the separate admin package:

npm install @layer-ai/admin

import { LayerAdmin } from '@layer-ai/admin';

const admin = new LayerAdmin({ apiKey: process.env.LAYER_ADMIN_KEY });

// Create a gate
const gate = await admin.gates.create({
  name: 'my-gate',
  model: 'gpt-4o-mini',
  systemPrompt: 'You are a helpful assistant'
});

// Use the gate ID for completions
const response = await layer.complete({
  gate: gate.id,
  data: { messages: [...] }
});

See the @layer-ai/admin documentation for details.

Database Migrations

If you're self-hosting Layer AI, database migrations ship with the @layer-ai/core package:

# Run migrations
cd node_modules/@layer-ai/core
npm run migrate

Migrations are located in @layer-ai/core/dist/lib/db/migrations/

Related Packages

  • @layer-ai/admin - admin operations (gates, API keys, logs)
  • @layer-ai/core - core runtime; includes the database migrations

License

MIT