
@lmnr-ai/lmnr

v0.7.12 · 25,439 downloads

TypeScript SDK for Laminar AI

Laminar TypeScript

JavaScript/TypeScript SDK for Laminar.

Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Check out our open-source repo and don't forget to star it ⭐


Quickstart

npm install @lmnr-ai/lmnr

Then, in your code:

import { Laminar } from '@lmnr-ai/lmnr'

Laminar.initialize({ projectApiKey: '<PROJECT_API_KEY>' })

This will automatically instrument most of the LLM, Vector DB, and related calls with OpenTelemetry-compatible instrumentation.

Read the docs to learn more.

Auto-instrumentations are provided by OpenLLMetry.

Where to place Laminar.initialize()

Laminar.initialize() must be called:

  • once in your application;
  • as early as possible, but after other instrumentation libraries.

Instrumentation

In addition to automatic instrumentation, we provide a simple observe() wrapper. It is useful when you want to trace a request handler or a function that combines multiple LLM calls.

Example

import { OpenAI } from 'openai';
import { Laminar as L, observe } from '@lmnr-ai/lmnr';

L.initialize({ projectApiKey: "<LMNR_PROJECT_API_KEY>" });

const client = new OpenAI({ apiKey: '<OPENAI_API_KEY>' });

const poemWriter = async (topic = "turbulence") => {
  const prompt = `write a poem about ${topic}`;
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: prompt }
    ]
  });

  const poem = response.choices[0].message.content;
  return poem;
}

// Observe the function like this
await observe({ name: 'poemWriter' }, () => poemWriter('laminar flow'));

Sending spans to Laminar from a different tracing library

Many tracing libraries accept spanProcessors as an initialization parameter.

Laminar exposes a LaminarSpanProcessor that you can use for this purpose.

Be careful NOT to call Laminar.initialize in such a setup, to avoid double tracing.

Example with @vercel/otel

For example, in Next.js instrumentation.ts you could do:

import { registerOTel } from '@vercel/otel'

export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { LaminarSpanProcessor, initializeLaminarInstrumentations } = await import("@lmnr-ai/lmnr");
    registerOTel({
      serviceName: "my-service",
      spanProcessors: [
        new LaminarSpanProcessor(),
      ],
      instrumentations: initializeLaminarInstrumentations(),
    });
  }
}

Evaluations

Quickstart

Install the package:

npm install @lmnr-ai/lmnr

Create a file named my-first-eval.ts with the following code:

import { evaluate } from '@lmnr-ai/lmnr';

const writePoem = ({topic}: {topic: string}) => {
    return `This is a good poem about ${topic}`
}

evaluate({
    data: [
        { data: { topic: 'flowers' }, target: { poem: 'This is a good poem about flowers' } },
        { data: { topic: 'cars' }, target: { poem: 'I like cars' } },
    ],
    executor: (data) => writePoem(data),
    evaluators: {
        containsPoem: (output, target) => target.poem.includes(output) ? 1 : 0
    },
    groupId: 'my_first_feature'
})

Run the following commands:

export LMNR_PROJECT_API_KEY=<LMNR_PROJECT_API_KEY>  # get from Laminar project settings
npx lmnr eval my-first-eval.ts

Visit the URL printed in the console to see the results.

Overview

Bring rigor to the development of your LLM applications with evaluations.

You can run evaluations locally by providing executor (part of the logic used in your application) and evaluators (numeric scoring functions) to evaluate function.

evaluate takes in the following parameters:

  • data – an array of Datapoint objects, where each Datapoint has two keys: target and data, each containing a key-value object.
  • executor – the logic you want to evaluate. This function must take data as the first argument, and produce any output.
  • evaluators – an object that maps evaluator names to evaluators. Each evaluator is a function that takes the output of the executor as the first argument and target as the second argument, and produces numeric scores. Each function can return either a single number or a Record<string, number> of scores.
  • name – optional name for the evaluation. Automatically generated if not provided.
  • groupName – optional group name for evaluation. Evaluations within the same group can be compared visually side by side.
  • config – optional additional override parameters.
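The two evaluator return shapes described above can be sketched as follows (the evaluator names and the Target type here are hypothetical, for illustration only):

```typescript
type Target = { poem: string };

// Evaluator returning a single numeric score (hypothetical name).
const exactMatch = (output: string, target: Target): number =>
  output === target.poem ? 1 : 0;

// Evaluator returning a Record<string, number> of named scores (hypothetical name).
const textScores = (output: string, target: Target): Record<string, number> => ({
  exact: output === target.poem ? 1 : 0,
  lengthRatio: Math.min(output.length / target.poem.length, 1),
});
```

Both shapes can be mixed freely in the same evaluators object; Laminar records each named score separately.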

* If you already have the outputs of the executors you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it.
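For instance, if each datapoint's data already carries a precomputed output (the field names here are hypothetical), the executor can simply extract it:

```typescript
// Datapoint data that already contains a precomputed output (hypothetical shape).
type PrecomputedData = { topic: string; output: string };

// Identity-style executor: returns the stored output instead of running the pipeline.
const identityExecutor = (data: PrecomputedData): string => data.output;
```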

Read the docs to learn more about evaluations.

Client for HTTP operations

Various interactions with the Laminar API are available via LaminarClient.

Agent

To run the Laminar agent, invoke client.agent.run:

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: "<YOUR_PROJECT_API_KEY>",
});

const response = await client.agent.run({
    prompt: "What is the weather in London today?",
});

// Be careful, `response` itself contains the state which may get large
console.log(response.result.content)

Streaming

Agent run supports streaming as well.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: "<YOUR_PROJECT_API_KEY>",
});

const response = await client.agent.run({
    prompt: "What is the weather in London today?",
    stream: true,
});

for await (const chunk of response) {
  console.log(chunk.chunkType)
  if (chunk.chunkType === 'step') {
    console.log(chunk.summary);
  } else if (chunk.chunkType === 'finalOutput') {
    // Be careful, `chunk.content` contains the state which may get large
    console.log(chunk.content.result);
  }
}
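The chunk-handling pattern above can be sketched self-contained with a mock stream; the chunk shapes below are inferred from the example, not the SDK's exported types:

```typescript
type AgentChunk =
  | { chunkType: 'step'; summary: string }
  | { chunkType: 'finalOutput'; content: { result: string } };

// Mock of the agent stream, yielding chunks shaped like the example above.
async function* mockAgentStream(): AsyncGenerator<AgentChunk> {
  yield { chunkType: 'step', summary: 'Opened the weather page' };
  yield { chunkType: 'finalOutput', content: { result: '18°C, cloudy' } };
}

// Collect step summaries and the final result from the stream.
async function consumeAgentStream(stream: AsyncIterable<AgentChunk>) {
  const steps: string[] = [];
  let result = '';
  for await (const chunk of stream) {
    if (chunk.chunkType === 'step') {
      steps.push(chunk.summary);
    } else if (chunk.chunkType === 'finalOutput') {
      result = chunk.content.result;
    }
  }
  return { steps, result };
}
```

Because chunkType is a discriminated union tag, TypeScript narrows the chunk's type inside each branch, so the summary and content fields are accessed safely.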