

ai-lambda-service

Run a local REST server from a declarative JSON config. Each endpoint can be backed by an OpenAI prompt, a JavaScript handler, a Microsoft 365 Copilot query via WorkIQ, or a chain of multiple endpoints for advanced workflows.

Installation

Install globally via npm:

npm install -g ai-lambda-service

Or add it as a project dependency:

npm install ai-lambda-service

Quick start

  1. Add your OpenAI key to .env:

OPENAI_API_KEY=sk-...

  2. Create a config file (e.g., config.json) — see Configuration below.

  3. Start the server:

ai-lambda-service start -c config.json

Or, if installed locally, use npx:

npx ai-lambda-service start -c config.json

  4. Try the sample endpoints:

  • AI prompt: POST http://localhost:4000/ai-greeting with { "name": "Ada" }
  • JS handler: POST http://localhost:4000/sum with { "a": 1, "b": 2 }
  • AI prompt (GET): GET http://localhost:4000/countries?continent=Europe
  • WorkIQ query: GET http://localhost:4000/meetings?day=today&timeOfDay=morning
  • Local LLM: GET http://localhost:4000/local-joke?topic=programming (requires LM Studio)

  5. Or use the interactive dashboard — open http://localhost:4000 in your browser!

Interactive Dashboard

When you visit the root URL of your running service, you'll see an interactive dashboard that lets you explore and test all endpoints without writing any code or using curl.


Features:

  • Lists all configured endpoints with method, path, and description
  • Shows handler type (AI Prompt, JS Handler, or WorkIQ Query) for each endpoint
  • Auto-generates input fields based on each endpoint's inputSchema
  • Send requests with one click and see formatted JSON responses
  • Displays response status and timing information

This makes it easy to:

  • Explore available endpoints during development
  • Test endpoints with different inputs
  • Debug responses without leaving the browser
  • Share API documentation with your team

Configuration

Minimal config example

Save this as config.json (paths in jsHandler.file are relative to this file):

{
	"port": 4000,
	"endpoints": [
		{
			"name": "hello-ai",
			"description": "Return a friendly greeting using an OpenAI prompt.",
			"path": "/ai-greeting",
			"method": "POST",
			"inputSchema": {
				"type": "object",
				"required": ["name"],
				"properties": { "name": { "type": "string" } },
				"additionalProperties": false
			},
			"outputSchema": {
				"type": "object",
				"required": ["greeting"],
				"properties": { "greeting": { "type": "string" } },
				"additionalProperties": false
			},
			"aiPrompt": {
				"prompt": "Write a JSON object with a key 'greeting' that greets the provided name in one short sentence.",
				"model": "gpt-5-mini",
				"temperature": 1
			}
		},
		{
			"name": "sum-js",
			"description": "Sum two numbers using a JS handler.",
			"path": "/sum",
			"method": "POST",
			"inputSchema": {
				"type": "object",
				"required": ["a", "b"],
				"properties": { "a": { "type": "number" }, "b": { "type": "number" } },
				"additionalProperties": false
			},
			"outputSchema": {
				"type": "object",
				"required": ["sum"],
				"properties": { "sum": { "type": "number" } },
				"additionalProperties": false
			},
			"jsHandler": {
				"file": "handlers/sum.js"
			}
		}
	]
}

Create handlers/sum.js next to the config:

module.exports = async (input) => {
	return { sum: Number(input.a) + Number(input.b) };
};
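Because a handler module exports a single async function that receives the validated input and returns an object matching the endpoint's outputSchema, you can exercise it directly without starting the server. A minimal sketch, reusing the sum handler above:

```javascript
// The handler contract: an async function from validated input to an
// output object. Defined inline here so the sketch is self-contained;
// in practice you'd require("./handlers/sum").
const sum = async (input) => {
  return { sum: Number(input.a) + Number(input.b) };
};

// Invoke it directly -- handy for unit-testing handlers in isolation.
sum({ a: 1, b: 2 }).then((result) => {
  console.log(result); // { sum: 3 }
});
```

This also means handlers can be shared between the service and other code paths, since nothing in the contract depends on Express.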

Environment

  • .env is loaded automatically.
  • OPENAI_API_KEY is required for any endpoint using aiPrompt (unless using a local LLM server).

Local LLM Support (LM Studio, Ollama, etc.)

You can use any OpenAI-compatible LLM server instead of OpenAI. This works great with:

  • LM Studio - Run local models with a GUI
  • Ollama - Run local models from the command line
  • vLLM - High-throughput LLM serving
  • Any other OpenAI-compatible server

Configuration Options

| Field | Level | Description |
|-------|-------|-------------|
| defaultBaseUrl | Top-level | Default LLM server URL for all endpoints |
| defaultModel | Top-level | Default model name for all endpoints |
| defaultApiKey | Top-level | Default API key (falls back to OPENAI_API_KEY env var) |
| baseUrl | Per-endpoint | Override server URL for this endpoint |
| model | Per-endpoint | Override model for this endpoint |
| apiKey | Per-endpoint | Override API key for this endpoint |

Example: Using LM Studio

  1. Start LM Studio and load a model (e.g., llama-3.2-3b-instruct)
  2. Enable the local server (default: http://localhost:1234/v1)
  3. Configure your endpoint:

{
  "defaultBaseUrl": "http://localhost:1234/v1",
  "defaultModel": "llama-3.2-3b-instruct",
  "endpoints": [
    {
      "name": "local-greeting",
      "description": "Generate a greeting using local LLM",
      "path": "/greeting",
      "method": "POST",
      "inputSchema": {
        "type": "object",
        "required": ["name"],
        "properties": { "name": { "type": "string" } }
      },
      "aiPrompt": {
        "prompt": "Generate a friendly greeting for the provided name."
      }
    }
  ]
}

Tips for Local LLMs

  • Plain text responses: Local models are more reliable without outputSchema. Omit it to get plain text responses instead of JSON.
  • Explicit prompts: Local models may need more explicit instructions than OpenAI models.
  • Model names: Use the exact model name shown in LM Studio or Ollama.
  • No API key required: Local servers typically don't require authentication.

Mixed Configuration (Local + Cloud)

You can use both local and cloud LLMs in the same config:

{
  "endpoints": [
    {
      "name": "cloud-endpoint",
      "description": "Uses OpenAI",
      "path": "/cloud",
      "method": "POST",
      "aiPrompt": {
        "prompt": "...",
        "model": "gpt-4o"
      }
    },
    {
      "name": "local-endpoint", 
      "description": "Uses local LM Studio",
      "path": "/local",
      "method": "POST",
      "aiPrompt": {
        "prompt": "...",
        "baseUrl": "http://localhost:1234/v1",
        "model": "llama-3.2-3b-instruct"
      }
    }
  ]
}

See CONFIG.md for full documentation on all aiPrompt options.

WorkIQ Integration

Endpoints can query Microsoft 365 Copilot using WorkIQ. This enables natural language queries about your emails, meetings, files, and other M365 data.

Prerequisites

  1. Install WorkIQ CLI globally:

npm install -g @microsoft/workiq

  2. Authenticate with your Microsoft 365 account:

workiq auth

WorkIQ Endpoint Example

{
  "name": "meetings",
  "description": "Get meetings for a specific day",
  "path": "/meetings",
  "method": "GET",
  "inputSchema": {
    "type": "object",
    "required": ["day"],
    "properties": {
      "day": { "type": "string", "description": "Date like 'today', 'tomorrow', or '2/4/2026'" },
      "timeOfDay": { "type": "string", "description": "morning, afternoon, or evening" }
    }
  },
  "outputSchema": {
    "type": "object",
    "required": ["meetings"],
    "properties": {
      "meetings": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "title": { "type": "string" },
            "time": { "type": "string" }
          }
        }
      }
    }
  },
  "workiqQuery": {
    "query": "What meetings do I have on {{day}} {{timeOfDay}}? Return as JSON with a 'meetings' array containing objects with 'title' and 'time' fields."
  }
}
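Given the outputSchema above, a successful response would be shaped like the following (illustrative values, not real calendar data):

```json
{
  "meetings": [
    { "title": "Team standup", "time": "9:00 AM" },
    { "title": "Design review", "time": "11:30 AM" }
  ]
}
```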

How It Works

  • The query field supports {{variable}} placeholders that are replaced with input values
  • The service connects to WorkIQ's MCP (Model Context Protocol) server for efficient communication
  • Falls back to CLI execution if MCP is unavailable
  • EULA is automatically accepted on first connection

CLI

Implemented in bin/ai-lambda-service.js.

ai-lambda-service start -c <config.json> -p <port> -v <level>
ai-lambda-service stop

  • -c, --config: path to JSON config (default ./config.json)
  • -p, --port: port override (else uses config.port or 3000)
  • -v, --verbose: debug|info|warn|error (default info)

How it works

  • Config is validated with AJV in src/config.js.
  • Routes are bound in src/server.js using Express.
  • Each endpoint uses either an OpenAI chat completion, a JS handler, a WorkIQ query, or a chain of other endpoints via src/engine.js.
  • Input/output validation uses JSON Schema per-endpoint.
  • Output format: When outputSchema is defined, responses are JSON. Without it, AI endpoints return plain text directly—useful for translations, summaries, etc.
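To picture what the per-endpoint schema check does, here is a hand-rolled sketch covering only required keys and primitive types. This is for illustration only; the package itself uses AJV, which implements the full JSON Schema spec:

```javascript
// Illustrative only: a tiny subset of JSON Schema validation standing in
// for what AJV does with each endpoint's inputSchema before the handler runs.
function validate(schema, data) {
  const errors = [];
  for (const key of schema.required || []) {
    if (!(key in data)) errors.push(`missing required property: ${key}`);
  }
  for (const [key, prop] of Object.entries(schema.properties || {})) {
    if (key in data && typeof data[key] !== prop.type) {
      errors.push(`${key} should be of type ${prop.type}`);
    }
  }
  return errors;
}

// The inputSchema from the /sum example above.
const inputSchema = {
  type: "object",
  required: ["a", "b"],
  properties: { a: { type: "number" }, b: { type: "number" } },
};

console.log(validate(inputSchema, { a: 1, b: 2 })); // []
console.log(validate(inputSchema, { a: 1 }));       // ["missing required property: b"]
```

When validation fails, the service can reject the request before the handler (or an AI call) ever runs, which keeps bad input from burning tokens.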

Chaining Endpoints

Create advanced workflows by chaining multiple endpoints together. The output of one endpoint becomes the input to the next:

{
  "name": "greeting-analyzed",
  "path": "/greeting-analyzed",
  "method": "POST",
  "chainHandler": {
    "steps": [
      {
        "name": "greet",
        "endpoint": "greeting",
        "input": { "name": "{{input.name}}" }
      },
      {
        "name": "analyze",
        "endpoint": "sentiment",
        "input": { "text": "{{greet.greeting}}" }
      }
    ],
    "output": {
      "greeting": "{{greet.greeting}}",
      "sentiment": "{{analyze.sentiment}}"
    }
  }
}

This chain:

  1. Generates a greeting for the provided name
  2. Analyzes the sentiment of the greeting
  3. Returns both the greeting and sentiment

Template syntax: Use {{input.field}}, {{stepName.field}}, or {{previousStep.field}} to reference data between steps.
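The placeholder syntax can be sketched as a string replacement over a context of step results. This is an illustration of how {{scope.field}} references resolve, not the package's actual implementation:

```javascript
// Resolve {{scope.field}} placeholders against a context object such as
// { input: {...}, greet: {...} } -- one entry per completed step.
function resolveTemplate(template, context) {
  return template.replace(/\{\{(\w+)\.(\w+)\}\}/g, (_, scope, field) => {
    const value = context[scope] && context[scope][field];
    return value !== undefined ? String(value) : "";
  });
}

// Hypothetical context after the "greet" step of the chain above has run.
const context = {
  input: { name: "Ada" },
  greet: { greeting: "Hello, Ada!" },
};

console.log(resolveTemplate("{{input.name}}", context));     // "Ada"
console.log(resolveTemplate("{{greet.greeting}}", context)); // "Hello, Ada!"
```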

Features:

  • Sequential execution with data flow between steps
  • Full validation at each step
  • Circular dependency detection at startup
  • Clear error messages with step context

See examples/chain.json for complete examples and CONFIG.md for full documentation.

Testing

npm test

Mocha + Supertest cover config loading and JS handler endpoints. Test fixtures live in test/fixtures.

License

ISC