easy-llm-call

v2.2.7

An easy way to make calls to LLMs with function calling and tools. Tested with DeepSeek, OpenAI, and Ollama.

easy-llm-call

Build tool-aware LLM workflows in Node.js or React with a few lines of code.

Overview

easy-llm-call wraps common LLM/chat-completion patterns (including function/tool calling) behind a compact API. It ships with:

  • EasyLLM: drop-in client for OpenAI-compatible HTTP chat endpoints (tested with DeepSeek).
  • EasyOllama: local Ollama chat client with tool calling support.
  • React hooks that manage message history, loading states, errors, and tool wiring.
  • A shared toolkit for registering tools, retrying calls, aborting requests, and normalizing messages.

Use this package when you want tooling-enabled assistants across hosted or local models without re-implementing the control loop.

Key Features

  • Tool/function calling with automatic argument parsing and tool result injection.
  • Configurable retry strategy with exponential-style delays.
  • Abortable requests via AbortController.
  • Unified callback interface for streaming UI updates or server logging.
  • TypeScript-first design with ambient type definitions for every surface area.
  • React hooks (useEasyLLM, useEasyOllama) to integrate assistants in minutes.

Package Layout

src/
├── lib/
│   ├── easy-llm/        # Hosted LLM helper (DeepSeek, OpenAI-compatible)
│   └── easy-ollama/     # Local Ollama helper with tool calling
├── react/
│   └── components/      # React hooks for both helpers
├── types/               # Shared ambient type definitions
└── utils/               # axios retry helper
test/
├── app.ts               # Interactive EasyLLM demo (DeepSeek, OpenAI, etc.)
└── ollama.ts            # Interactive EasyOllama demo

All public exports flow through:

  • src/lib/index.ts → EasyLLM, EasyOllama
  • src/react/index.ts → useEasyLLM, useEasyOllama
  • src/types/index.d.ts → ambient types for library consumers
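
In practice that means two import paths (a short sketch of the entry points listed above; the react subpath is the one used throughout the hook examples below):

import { EasyLLM, EasyOllama } from 'easy-llm-call';
import { useEasyLLM, useEasyOllama } from 'easy-llm-call/react';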

Installation

npm install easy-llm-call
# or
yarn add easy-llm-call

Peer dependency: react >= 17 (only needed for hook usage).

Quick Start – DeepSeek / OpenAI Style

import { EasyLLM } from 'easy-llm-call';

const llm = EasyLLM({
  apiKey: process.env.DEEPSEEK_API_KEY,
  url: 'https://api.deepseek.com/chat/completions',
})
  .registerTool('get_time', {
    desc: 'Return the current ISO timestamp',
    func: () => new Date().toISOString(),
  })
  .onMessage((message) => {
    console.log('[assistant]', message.content);
  })
  .onToolCall((id, name, args) => {
    console.log('[tool call]', { id, name, args });
  })
  .onToolResult((id, name, result) => {
    console.log('[tool result]', { id, name, result });
  })
  .onError((err) => {
    console.error('[error]', err);
  });

llm.send({
  model: 'deepseek-chat',
  tool_choice: 'auto',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What time is it in UTC?' },
  ],
});

Quick Start – Ollama (local)

import { EasyOllama } from 'easy-llm-call';

const ollama = EasyOllama({
  url: 'http://127.0.0.1:11434/api/chat', // default
})
  .registerTool('get_weather', {
    desc: 'Fetches weather for a city',
    props: {
      city: { desc: 'City name', required: true },
    },
    func: async ({ city }) => {
      // call your own weather API here
      return `Weather for ${city} is 22°C and sunny`;
    },
  })
  .onMessage((message) => console.log('[assistant]', message.content))
  .onToolCall((id, name, args) => console.log('[tool call]', { id, name, args }))
  .onError((err) => console.error(err));

ollama.send({
  model: 'llama3.1:8b',
  tool_choice: 'auto',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Should I carry an umbrella in Tokyo today?' },
  ],
});

ℹ️ The Ollama API must be running locally with a model that supports tool/function calling.

React Hooks

import { useEasyLLM } from 'easy-llm-call/react';

export function ChatWidget() {
  const {
    messages,
    loading,
    send,
    errors: { axiosErrors, toolErrors },
  } = useEasyLLM({
    apiKey: process.env.DEEPSEEK_API_KEY!,
    systemPrompt: 'You are a friendly concierge.',
    tools: [
      {
        name: 'get_time',
        desc: 'Return current ISO timestamp',
        func: () => new Date().toISOString(),
      },
    ],
  });

  return (
    <div>
      <ul>
        {messages.map(({ role, content, timestamp }) => (
          <li key={timestamp}>
            <strong>{role}:</strong> {content}
          </li>
        ))}
      </ul>
      <button disabled={loading} onClick={() => send({
        message: { role: 'user', content: 'Ping!' },
        model: 'deepseek-chat',
        tool_choice: 'auto',
      })}>
        {loading ? 'Waiting…' : 'Send'}
      </button>
      {axiosErrors.length > 0 && <pre>{axiosErrors.at(-1)?.message}</pre>}
      {toolErrors.length > 0 && <pre>{toolErrors.at(-1)?.message}</pre>}
    </div>
  );
}

The hook mirrors the plain factory API while maintaining stateful message history, loading flags, and error buckets for UI binding.

An equivalent useEasyOllama hook targets local models:

import { useEasyOllama } from 'easy-llm-call/react';

const { messages, send } = useEasyOllama({
  systemPrompt: 'You are a local assistant.',
  tools: [
    {
      name: 'echo',
      desc: 'Return the same text back',
      props: { text: { required: true } },
      func: ({ text }) => text,
    },
  ],
});
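
Calling send from the hook mirrors the useEasyLLM example above (a sketch; model availability depends on your local Ollama install):

send({
  message: { role: 'user', content: 'Echo this back, please.' },
  model: 'llama3.1:8b',
  tool_choice: 'auto',
});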

Tool Recipes

1. Basic synchronous tool

llm.registerTool('get_version', {
  desc: 'Return Node.js version',
  func: () => process.version,
});

2. Tools with typed parameters

llm.registerTool('calculate_bmi', {
  desc: 'Compute BMI from height and weight',
  props: {
    height_cm: { type: 'number', required: true },
    weight_kg: { type: 'number', required: true },
  },
  func: ({ height_cm, weight_kg }) => {
    const meters = height_cm / 100;
    return (weight_kg / (meters * meters)).toFixed(2);
  },
});

3. Async tools that call external APIs

llm.registerTool('search_docs', {
  desc: 'Search documentation',
  props: { query: { required: true } },
  func: async ({ query }) => {
    const res = await fetch('https://docs.example.com/search?q=' + encodeURIComponent(query));
    const { results } = await res.json();
    return results.slice(0, 3);
  },
});

4. Shared tool registry

Create a reusable helper:

// tools.ts
export const registerCommonTools = (client) =>
  client
    .registerTool('get_time', { func: () => new Date().toISOString(), desc: 'Now in ISO' })
    .registerTool('echo', {
      desc: 'Echo input text',
      props: { text: { required: true } },
      func: ({ text }) => text,
    });

// app.ts
import { EasyOllama } from 'easy-llm-call';
import { registerCommonTools } from './tools';

const client = registerCommonTools(EasyOllama());

Configuration Reference

Factory options

| Option              | EasyLLM | EasyOllama | Default                                    | Notes                               |
|---------------------|---------|------------|--------------------------------------------|-------------------------------------|
| url                 | ✔       | ✔          | DeepSeek / http://127.0.0.1:11434/api/chat | Override API endpoint               |
| apiKey              | ✔       | ✖          | undefined                                  | Injected in Authorization header    |
| headers             | ✖       | ✔          | {}                                         | Extra headers for Ollama            |
| timeoutMS           | ✔       | ✔          | 180000                                     | Axios request timeout               |
| retries             | ✔       | ✔          | 3                                          | Number of retry attempts            |
| retryDelay          | ✔       | ✔          | 1000                                       | Delay between retries (ms)          |
| betweenRequestDelay | ✔       | ✔          | 0                                          | Sleep after tool calls before retry |
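
Putting the options together, a fully configured hosted client might look like this (a sketch; the values shown are arbitrary):

import { EasyLLM } from 'easy-llm-call';

const llm = EasyLLM({
  url: 'https://api.deepseek.com/chat/completions',
  apiKey: process.env.DEEPSEEK_API_KEY,
  timeoutMS: 60000,         // fail the HTTP request after 60s
  retries: 5,               // retry failed requests up to 5 times
  retryDelay: 2000,         // wait 2s between attempts
  betweenRequestDelay: 500, // pause after tool calls before the follow-up request
});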

send(...) payload

Both helpers expect objects compatible with their respective chat endpoints:

  • EasyLLM.send(request) matches OpenAI/DeepSeek ChatCompletionRequest
  • EasyOllama.send(request) matches the Ollama /api/chat payload

Common fields:

  • model – required model name.
  • messages – array of chat messages; tools inject tool responses automatically.
  • tools – optional, auto-populated from registered tool schemas.
  • tool_choice – 'auto' | 'none' | string to control invocation.
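
For example, tool use can be disabled for a single request without unregistering anything (a sketch):

llm.send({
  model: 'deepseek-chat',
  tool_choice: 'none', // registered tools stay registered but are not offered to the model
  messages: [{ role: 'user', content: 'Answer from your own knowledge only.' }],
});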

Callbacks & Lifecycle

Every factory exposes chainable listeners:

  • onMessage(message) – fires for assistant messages (including tool-call previews).
  • onError(error) – called once per failing request (after retries are exhausted).
  • onStateChange(loading) – toggles true/false around request cycles.
  • onToolCall(id, toolName, args) – before executing a registered tool.
  • onToolResult(id, toolName, result) – after tool resolves.
  • onToolError(id, toolName, error) – when a tool throws/errors.

Return value from registerTool and send is the same client, enabling fluent chaining.
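
The quick-start snippets already wire onMessage, onToolCall, onToolResult, and onError; the remaining listeners chain the same way (a sketch using the signatures above):

llm
  .onStateChange((loading) => {
    // true when a request starts, false once it settles; handy for spinners
    console.log(loading ? 'request in flight…' : 'idle');
  })
  .onToolError((id, toolName, error) => {
    console.error('[tool error]', { id, toolName, error });
  });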

Error Handling

  • Network failures are retried using axiosWithRetry (see src/utils/retry.ts).
  • Tool exceptions are captured, forwarded to callbacks, and converted into synthetic tool messages so the model can react.
  • To cancel a long-running request, invoke client.abort(). The current AbortController is swapped in before each call.
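
For instance, a hard client-side deadline can be layered on top of the documented abort() method (a sketch; the timeout value is arbitrary):

// Abort the in-flight request if it takes longer than 30 seconds.
const deadline = setTimeout(() => llm.abort(), 30000);

llm.onMessage((message) => {
  clearTimeout(deadline);
  console.log(message.content);
});

llm.send({
  model: 'deepseek-chat',
  messages: [{ role: 'user', content: 'Write a long essay.' }],
});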

Development & Testing

# Build TypeScript
npm run build

# Interactive DeepSeek-style demo
npm run test -- "<YOUR_MODEL_KEY>"

# Interactive Ollama demo (model, endpoint optional)
npm run test:ollama llama3.1:8b

During development you may inspect the TypeScript sources directly (src/); compiled JavaScript and declaration files live in dist/.

License

MIT © Arna051