
huxy-llm-api

v1.1.2

Published

A simple, easy-to-use Node.js library that streamlines calls to the Ollama and OpenAI APIs.


huxy-llm-api

A simple, easy-to-use Node.js library that streamlines calls to the Ollama and OpenAI APIs. It supports streaming responses and custom configuration, and is well suited to quickly integrating large language model services.

Features

  • Unified interface: a consistent calling style for both Ollama and OpenAI services
  • Streaming support: built-in streaming response handling with real-time callbacks
  • Flexible configuration: custom model parameters and API settings
  • Error handling: built-in parameter validation and error handling
  • Multiple capabilities: chat, text generation, response handling, and other AI features
  • Custom fetch: high-performance HTTP requests via Undici

Installation

npm install huxy-llm-api
# or
pnpm add huxy-llm-api
# or
yarn add huxy-llm-api

Quick Start

Basic usage

import startApi from 'huxy-llm-api';

// Initialize the Ollama API client
const ollamaApi = startApi('ollama', {
  apiKey: 'your-api-key',
  host: 'http://localhost:11434',
  dispatcher: {
    headersTimeout: 10 * 60 * 1000,
  },
}, {
  model: 'qwen3-vl:latest',
  options: {
    num_ctx: 4096,
  },
});

// Initialize the OpenAI API client
const openaiApi = startApi('openai', {
  apiKey: 'your-api-key',
  baseURL: 'https://api.openai.com/v1',
});

Usage Examples

Ollama: text generation

const result = await ollamaApi.generate('Hello', {
  model: 'qwen3-vl',
  stream: false,
  options: {
    temperature: 0.15,
    top_p: 0.9,
  },
}, (message) => {
  console.log('Streamed chunk:', message);
});

console.log('Final result:', result);

OpenAI: chat

const response = await openaiApi.chat('Who are you?', {
  model: 'gpt-3.5-turbo',
  temperature: 0.7,
  stream: true,
}, (message, rawResponse) => {
  console.log('Streamed message:', message);
  console.log('Raw response:', rawResponse);
});

console.log('Chat result:', response);

Image Helpers

  • saveImage(base64String, outputDir = './images', name): saves a base64-encoded image to the local filesystem.
  • imageToBase64(imagePath, includeMimeType = false): converts a local image file to a base64 string.

Usage:

// imageToBase64
const image = await ollamaApi.imageToBase64('./example.png');
const result = await ollamaApi.generate('Hello', {
  model: 'qwen3-vl',
  stream: false,
  image,
  options: {
    temperature: 0.15,
    top_p: 0.9,
  },
}, (message) => {
  console.log('Streamed chunk:', message);
});

// saveImage
const result = await ollamaApi.generate('Hello', {
  model: 'qwen3-vl',
  stream: false,
  options: {
    temperature: 0.15,
    top_p: 0.9,
  },
}, (message) => {
  console.log('Streamed chunk:', message);
});
saveImage(result.image);
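For reference, the conversion `imageToBase64` performs can be approximated in plain Node.js. The sketch below shows the assumed behavior, including an assumed data-URL prefix when `includeMimeType` is true; it is not the library's actual implementation:

```javascript
import { readFile } from 'node:fs/promises';
import { extname } from 'node:path';

// Sketch: read a local image file and return its contents as a base64
// string, optionally prefixed as a data URL (assumed behavior).
async function imageToBase64Sketch(imagePath, includeMimeType = false) {
  const buffer = await readFile(imagePath);
  const base64 = buffer.toString('base64');
  if (!includeMimeType) return base64;
  const mimeByExt = {
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.jpeg': 'image/jpeg',
    '.webp': 'image/webp',
  };
  const mime = mimeByExt[extname(imagePath).toLowerCase()] ?? 'application/octet-stream';
  return `data:${mime};base64,${base64}`;
}
```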

API Reference

startApi(apiType, userConfig, userOption)

Initializes an LLM API client.

Parameters:

  • apiType: 'ollama' | 'openai' - which API backend to use
  • userConfig: object - custom API settings, such as apiKey, baseURL, and fetch
  • userOption: object - default model parameters applied to every call

Returns: an API client instance with the following methods:

Ollama methods

  • generate(prompt, configs, callback): text generation
  • chat(prompt, configs, callback): chat conversation
  • responses(prompt, configs, callback): structured responses

OpenAI methods

  • chat(prompt, configs, callback): chat conversation
  • responses(prompt, configs, callback): structured responses

Common parameters

  • prompt: string or message array - the input prompt
  • configs: object - model parameter configuration
    • model: model name
    • stream: whether to stream the response (default: false)
    • system: system prompt (chat mode)
    • options: additional model parameters (for OpenAI, use extra_body)
      • temperature: sampling temperature (0–1)
      • top_p: nucleus sampling probability
  • callback: function - callback invoked for streamed responses
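To make the parameter shapes concrete, here is an illustrative prompt-and-configs pair. The message array follows the common `{ role, content }` convention; treat the exact accepted fields as assumptions based on the parameter list above:

```javascript
// An input prompt as a message array (instead of a plain string),
// pairing a system role with a user turn.
const prompt = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Summarize streaming APIs in one sentence.' },
];

// A configs object built from the common parameters above.
const configs = {
  model: 'qwen3-vl',
  stream: true, // chunks are delivered through the callback
  options: {
    temperature: 0.3, // lower values make output more deterministic
    top_p: 0.9,
  },
};
```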

Configuration

Default configuration

The package ships with default configuration that can be overridden via environment variables or parameters:

Ollama defaults:

{
  apiKey: process.env.OLLM_API_KEY || '1234',
  host: process.env.OLLM_API_HOST || 'http://localhost:11434',
  params: {
    // keep_alive: -1,
  },
  options: {
    // temperature: 0.6,
  }
}

OpenAI defaults:

{
  apiKey: process.env.LLM_API_KEY || '1234',
  baseURL: process.env.LLM_API_BASEURL || 'http://localhost:11434/v1',
  params: {
    // temperature: 1,
  },
  options: {
    // thinking: true,
  }
}
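User-supplied config presumably overrides these defaults key by key. Below is a minimal sketch of such a merge; the one-level-deep handling of `params` and `options` is an assumption, not the library's documented behavior:

```javascript
const ollamaDefaults = {
  apiKey: process.env.OLLM_API_KEY || '1234',
  host: process.env.OLLM_API_HOST || 'http://localhost:11434',
  params: {},
  options: {},
};

// Shallow-merge user config over the defaults, going one level deep
// for the nested params/options objects so partial overrides work.
function mergeConfig(defaults, userConfig = {}) {
  return {
    ...defaults,
    ...userConfig,
    params: { ...defaults.params, ...userConfig.params },
    options: { ...defaults.options, ...userConfig.options },
  };
}
```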

Environment Variables

API keys and endpoints can be configured via environment variables:

# Ollama
export OLLM_API_KEY="your-key"
export OLLM_API_HOST="http://localhost:11434"

# OpenAI
export LLM_API_KEY="your-key"
export LLM_API_BASEURL="https://api.openai.com/v1"
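The env-var handling shown in the defaults above reduces to a lookup with a hard-coded fallback; a minimal sketch of that pattern:

```javascript
// Resolve a setting from the environment, falling back to a default,
// mirroring how the default configs pick up OLLM_API_HOST and friends.
function resolveSetting(envVar, fallback) {
  const value = process.env[envVar];
  return value !== undefined && value !== '' ? value : fallback;
}

const ollamaHost = resolveSetting('OLLM_API_HOST', 'http://localhost:11434');
const openaiBaseURL = resolveSetting('LLM_API_BASEURL', 'http://localhost:11434/v1');
```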

Examples

See example.js for a complete usage example.

Contributing

Issues and pull requests are welcome.

License

MIT License © ahyiru

Contact