@lmnr-ai/index

v0.0.1-alpha.1

The SOTA Open-Source Browser Agent for autonomously performing complex tasks on the web

Index

Index is the SOTA open-source browser agent for autonomously executing complex tasks on the web.

  • [x] Powered by reasoning LLMs with vision capabilities using AI SDK.
    • [x] Gemini 2.5 Pro (really fast and accurate)
    • [x] Claude 3.7 Sonnet with extended thinking (reliable and accurate)
    • [x] OpenAI o4-mini (depending on the reasoning effort, provides a good balance between speed, cost, and accuracy)
    • [x] Gemini 2.5 Flash (really fast, cheap, and good for less complex tasks)
  • [ ] [WIP] index run to run the agent in the interactive CLI
  • [x] Index is also available as a serverless API.
  • [x] You can also try out Index via the Chat UI.
  • [x] Supports advanced browser agent observability powered by the open-source platform Laminar.
  • [x] npm install @lmnr-ai/index and use it in your project

prompt: go to ycombinator.com. summarize first 3 companies in the W25 batch and make new spreadsheet in google sheets.

https://github.com/user-attachments/assets/2b46ee20-81b6-4188-92fb-4d97fe0b3d6a

Documentation

Check out the full documentation here

Install dependencies

npm install @lmnr-ai/index

Set up model API keys

Set up your model API keys in a .env file in your project root:

GOOGLE_GENERATIVE_AI_API_KEY=
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
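Note that plain Node does not load .env automatically; the model providers read these keys from process.env, so make sure the file is loaded, e.g. with the dotenv package or node --env-file=.env (Node 20.6+). As a purely illustrative sketch of what such a loader does (not a replacement for dotenv):

```typescript
// Toy .env parser for illustration only; real projects should prefer
// `node --env-file=.env` (Node 20.6+) or the `dotenv` package.
function parseEnv(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    // Matches KEY=value; comment lines starting with '#' won't match
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}

// Merge into process.env without overriding already-set variables
const parsed = parseEnv("OPENAI_API_KEY=sk-example\n# a comment\n");
for (const [key, value] of Object.entries(parsed)) {
  process.env[key] ??= value;
}
```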

Run Index with code

import { google } from "@ai-sdk/google";
import { Agent } from "@lmnr-ai/index";

async function main() {
  // Any provider and model can go here as long as it supports vision
  const agent = new Agent(google("gemini-2.5-pro-preview-05-06"));

  const output = await agent.run({
    prompt: "Navigate to news.ycombinator.com, find a post about AI, and summarize it",
  });
  console.log(output.result);
}

// .catch to avoid top-level await
main().catch((e) => console.error(e));

Streaming mode

import { google } from "@ai-sdk/google";
import { Agent } from "@lmnr-ai/index";

async function main() {
  const agent = new Agent(google("gemini-2.5-pro-preview-05-06"));
  const output = await agent.runStream({
    prompt: "Navigate to news.ycombinator.com, find a post about AI, and summarize it",
  });

  for await (const chunk of output) {
    console.log(chunk);
  }
}

// .catch to avoid top-level await
main().catch((e) => console.error(e));

Control logging level

By default, Index logs at the warn level and above, which is quite quiet. You can change this by setting the LMNR_LOG_LEVEL environment variable, e.g.

export LMNR_LOG_LEVEL=info

Use Index via API

The easiest way to use Index in production is through the serverless API. The Index API manages remote browser sessions, agent infrastructure, and browser observability. To get started, create a project API key in Laminar.

Install Laminar

npm install @lmnr-ai/lmnr

Make an API call

import { LaminarClient, Laminar } from '@lmnr-ai/lmnr';

// Initialize tracing first
Laminar.initialize();

const client = new LaminarClient({
  projectApiKey: "<YOUR_PROJECT_API_KEY>",
});

const response = await client.agent.run({
  prompt: "What is the weather in London today?",
});

for await (const chunk of response) {
  console.log(chunk.chunkType);
  if (chunk.chunkType === 'step') {
    console.log(chunk.summary);
  } else if (chunk.chunkType === 'finalOutput') {
    console.log(chunk.content.result);
  }
}

Browser agent observability

Both code runs and API runs provide advanced browser observability. To trace the Index agent's actions and record the browser session, simply initialize Laminar tracing before running the agent.

import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: "<YOUR_PROJECT_API_KEY>",
});

This initialization must run at your application entrypoint, after any other OpenTelemetry tracing has been initialized.

For Next.js apps, place this inside the register function in instrumentation.ts (Learn more).
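As a minimal sketch of what that file might look like (the environment variable name LMNR_PROJECT_API_KEY is an assumption here; adapt to your setup):

```typescript
// instrumentation.ts — Next.js calls register() once at server startup
export async function register() {
  // Only initialize in the Node.js runtime, not the Edge runtime
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```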

You then get full observability of the agent's actions, synced with the browser session, in the Laminar platform. Learn more about browser agent observability in the documentation.


Made with ❤️ by the Laminar team