
ai-agent-enterprise

v1.5.0


Downloads: 212

Readme


AI Agent

AI Agent simplifies the implementation and use of generative AI with LangChain. You can add components such as vectorized search services (check options in "link"), conversation history (check options in "link"), custom databases (check options in "link"), and API contracts (OpenAPI).

Installation

Use the package manager npm to install AI Agent.

npm install ai-agent-enterprise
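
The examples below create an Agent instance directly, so you will also need to import the class. A minimal sketch, assuming Agent is a named export of the package entry point (check the package's typings if the export differs):

// Assumption: Agent is a named export of the package entry point.
const { Agent } = require('ai-agent-enterprise');

// or, with ES modules:
// import { Agent } from 'ai-agent-enterprise';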

Simple use

LLM + Prompt Engineering

const agent = new Agent({
  name: '<name>',
  systemMesssage: '<a message that will specialize your agent>',
  llmConfig: {
    type: '<cloud-provider-llm-service>', // Check availability at <link>
    model: '<llm-model>',
    instance: '<instance-name>', // Optional
    apiKey: '<key-your-llm-service>', // Optional
  },
  chatConfig: {
    temperature: 0,
  },
});

// If streaming is enabled, tokens are received here
agent.on('onToken', async (token) => {
  console.warn('token:', token);
});

agent.on('onMessage', async (message) => {
  console.warn('MESSAGE:', message);
});

await agent.call({
  question: 'What is the best way to get started with Azure?',
  chatThreadID: '<chat-id>',
  stream: true,
});
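
Because the answer arrives through events, a common pattern when streaming is to accumulate the tokens into the full reply. A small sketch, assuming the onToken payload is a plain text chunk and onMessage fires once the reply is complete (the payload shapes are not documented above):

// Sketch: accumulate streamed tokens into one string.
// Assumes `token` is a text chunk and onMessage fires when the reply is complete.
let answer = '';

agent.on('onToken', (token) => {
  answer += token;
});

agent.on('onMessage', (message) => {
  console.log('streamed answer:', answer);
  console.log('final message event:', message);
});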

Using with Chat History

When you use LLM + chat history, every exchanged message is persisted and sent to the LLM as conversation context.

  const agent = new Agent({
    name: '<name>',
    systemMesssage: '<a message that will specialize your agent>',
    chatConfig: {
      temperature: 0,
    },
    llmConfig: {
      type: '<cloud-provider-llm-service>', // Check availability at <link>
      model: '<llm-model>',
      instance: '<instance-name>', // Optional
      apiKey: '<key-your-llm-service>', // Optional
    },
    dbHistoryConfig: {
      type: '<type-database>', // Check availability at <link>
      host: '<host-database>', // Optional
      port: '<port-database>', // Optional
      sessionTTL: '<ttl-database>', // Optional. Time the conversation will be saved in the database
      limit: '<limit-messages>', // Optional. Limit set for maximum messages included in conversation prompt
    },
  });

  // If streaming is enabled, tokens are received here
  agent.on('onToken', async (token) => {
    console.warn('token:', token);
  });

  agent.on('onMessage', async (message) => {
    console.warn('MESSAGE:', message);
  });

  await agent.call({
    question: 'What is the best way to get started with Azure?',
    chatThreadID: '<chat-id>',
    stream: true,
  });
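
The chatThreadID passed to call() is presumably the key under which the history is stored: reusing the same value should continue one persisted conversation, while a new value starts a fresh one. This is an assumption based on the parameter name, not something documented above:

  // Assumption: calls that share a chatThreadID continue the same persisted conversation.
  await agent.call({ question: 'My name is Ana.', chatThreadID: 'thread-123' });
  await agent.call({ question: 'What is my name?', chatThreadID: 'thread-123' });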

Using with Vector stores

When using LLM + vector stores, the Agent retrieves the documents relevant to the input question and uses them as context for its answer.

Example of the concept of vectorized search

  const agent = new Agent({
    name: '<name>',
    systemMesssage: '<a message that will specialize your agent>',
    chatConfig: {
      temperature: 0,
    },
    llmConfig: {
      type: '<cloud-provider-llm-service>', // Check availability at <link>
      model: '<llm-model>',
      instance: '<instance-name>', // Optional
      apiKey: '<key-your-llm-service>', // Optional
    },
    vectorStoreConfig: {
      type: '<type-vector-service>', // Check availability at <link>
      apiKey: '<your-api-key>', // Optional
      indexes: ['<index-name>'], // Your index names. Optional
      vectorFieldName: '<vector-base-field>', // Optional
      name: '<vector-service-name>', // Optional
      apiVersion: '<api-version>', // Optional
      model: '<llm-model>', // Optional
      customFilters: '<custom-filter>', // Optional. Example: 'field-vector-store=(userSessionId)', check at <link>
    },
  });

  // If streaming is enabled, tokens are received here
  agent.on('onToken', async (token) => {
    console.warn('token:', token);
  });

  agent.on('onMessage', async (message) => {
    console.warn('MESSAGE:', message);
  });

  await agent.call({
    question: 'What is the best way to get started with Azure?',
    chatThreadID: '<chat-id>',
    stream: true,
  });

Using with Database custom

SQL + LLM for prompt construction combines Structured Query Language (SQL) with LLMs to build queries and prompts for retrieving data from a database. The LLM translates natural-language questions into database-specific commands, making it easier to interact with a database and retrieve information in an intuitive way.

Example of the concept of SQL + LLM

const agent = new Agent({
  name: '<name>',
  systemMesssage: '<a message that will specialize your agent>',
  chatConfig: {
    temperature: 0,
  },
  llmConfig: {
    type: '<cloud-provider-llm-service>', // Check availability at <link>
    model: '<llm-model>',
    instance: '<instance-name>', // Optional
    apiKey: '<key-your-llm-service>', // Optional
  },
  dataSourceConfig: {
    type: '<type-database>', // Check availability at <link>
    username: '<username-database>', // Required
    password: '<username-pass>', // Required
    host: '<host-database>', // Required
    name: '<connection-name>', // Required
    includesTables: ['<table-name>'], // Optional
    ssl: '<ssl-mode>', // Optional
    maxResult: '<max-result-database>', // Optional. Limit set for maximum data included in conversation prompt.
    customizeSystemMessage: '<custom-chain-prompt>', // Optional. Adds prompt specifications for custom database operations.
  },
});

// If streaming is enabled, tokens are received here
agent.on('onToken', async (token) => {
  console.warn('token:', token);
});

agent.on('onMessage', async (message) => {
  console.warn('MESSAGE:', message);
});

await agent.call({
  question: 'What is the best way to get started with Azure?',
  chatThreadID: '<chat-id>',
  stream: true,
});
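
As a rough illustration of the flow (the question, table, and generated SQL below are hypothetical; the actual query depends on your schema, the configured maxResult and the LLM):

// Hypothetical flow: the LLM turns the question into SQL, runs it against the
// configured data source, and uses the returned rows as context for the answer.
await agent.call({
  question: 'How many orders were placed last month?',
  chatThreadID: '<chat-id>',
});
// The chain might internally generate and execute something like:
//   SELECT COUNT(*) FROM orders WHERE created_at >= NOW() - INTERVAL '1 month';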

Using with OpenAPI contract

OpenAPI + LLM for prompt construction combines OpenAPI, a standard for documenting and describing RESTful APIs, with large language models. The LLM reads the OpenAPI specification and generates the requests or prompts needed to interact with the API, making the process of interfacing with APIs simpler and more accessible.

Example of the concept of OpenAPI + LLM

const agent = new Agent({
  name: '<name>',
  systemMesssage: '<a message that will specialize your agent>',
  chatConfig: {
    temperature: 0,
  },
  llmConfig: {
    type: '<cloud-provider-llm-service>', // Check availability at <link>
    model: '<llm-model>',
    instance: '<instance-name>', // Optional
    apiKey: '<key-your-llm-service>', // Optional
  },
  openAPIConfig: {
    xApiKey: '<x-api-key>', // Optional. API key used when calling the API
    data: '<data-contract>', // Required. OpenAPI contract
    customizeSystemMessage: '<custom-chain-prompt>', // Optional. Adds prompt specifications for custom OpenAPI operations.
  },
});

// If streaming is enabled, tokens are received here
agent.on('onToken', async (token) => {
  console.warn('token:', token);
});

agent.on('onMessage', async (message) => {
  console.warn('MESSAGE:', message);
});

await agent.call({
  question: 'What is the best way to get started with Azure?',
  chatThreadID: '<chat-id>',
  stream: true,
});
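
The data field takes the OpenAPI document that describes the API the agent is allowed to call. The exact accepted format (JSON string vs. parsed object) is not specified above, so this sketch assumes a JSON string built from a minimal, hypothetical OpenAPI 3.0 document:

// Minimal OpenAPI 3.0 contract for a hypothetical Orders API, passed as a JSON string.
const openAPIContract = JSON.stringify({
  openapi: '3.0.0',
  info: { title: 'Orders API', version: '1.0.0' },
  servers: [{ url: 'https://api.example.com' }],
  paths: {
    '/orders': {
      get: {
        operationId: 'listOrders',
        summary: 'List orders',
        responses: { '200': { description: 'A list of orders' } },
      },
    },
  },
});

// Then pass it in the agent configuration:
// openAPIConfig: { data: openAPIContract, xApiKey: '<x-api-key>' }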

Contributing

If you've ever wanted to contribute to open source, and a great cause, now is your chance!

See the contributing docs for more information.

Contributors ✨

License

Apache-2.0