
@shadokan/token.js

v0.5.2


(fork of token.js) Integrate 9 LLM providers with a single TypeScript SDK using OpenAI's format.


Token.js

Note: This is a fork of the original token.js package.

Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.

Features

  • Use OpenAI's format to call 200+ LLMs from 10+ providers.
  • Supports tools, JSON outputs, image inputs, streaming, and more.
  • Runs completely on the client side. No proxy server needed.
  • Free and open source under MIT.

Supported Providers

  • AI21
  • Anthropic
  • AWS Bedrock
  • Cohere
  • Gemini
  • Groq
  • Mistral
  • OpenAI
  • Perplexity
  • OpenRouter
  • Any other model provider with an OpenAI compatible API

Documentation

Setup

Installation

npm install token.js

Usage

Import the Token.js client and call the create function with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields. First, make sure your OpenAI API key is available as an environment variable:

OPENAI_API_KEY=<openai api key>

import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openai',
    model: 'gpt-4o',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
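The response object follows OpenAI's chat-completion shape, so the reply text lives at `choices[0].message.content`. A minimal sketch of pulling it out safely (the `MockCompletion` type and `extractText` helper below are illustrative, not part of Token.js):

```typescript
// Minimal slice of the OpenAI-style completion shape we care about.
interface MockCompletion {
  choices: Array<{ message: { role: string; content: string | null } }>
}

// Return the assistant's text, or an empty string if the model sent none
// (e.g. when it responded with tool calls instead of text).
function extractText(completion: MockCompletion): string {
  return completion.choices[0]?.message?.content ?? ''
}

// Illustrative response object standing in for a real API result.
const mock: MockCompletion = {
  choices: [{ message: { role: 'assistant', content: 'Hello! How can I help?' } }],
}

const text = extractText(mock)
```

The null fallback matters because, per OpenAI's format, `content` can be null when the model answers with tool calls instead of text.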

Using an Unlisted Model

If you need to use a model that's not in Token.js's predefined list (such as a fine-tuned model or a newly released one), you can use the byPassModelCheck option:

// Initialize with model check bypassed
const tokenjs = new TokenJS({
  byPassModelCheck: true
})

// Now you can use any model name
await tokenjs.chat.completions.create({
  provider: 'bedrock',
  model: 'us.anthropic.claude-3-sonnet', // Would normally throw an error if unlisted
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
})

When using TypeScript, you may need to cast unlisted model names:

model: 'your-custom-model' as any
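To avoid sprinkling `as any` through your call sites, you can centralize the cast in one spot. A purely illustrative sketch (`customModel` is a hypothetical helper, not a Token.js API):

```typescript
// Declare unlisted model names once, keeping the type escape hatch in a
// single place instead of casting at every call site.
function customModel(name: string): any {
  return name
}

// Hypothetical fine-tuned model identifier, for illustration only.
const MY_MODEL = customModel('my-fine-tuned-model')
```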

Access Credentials

We recommend using environment variables to configure the credentials for each LLM provider.

# OpenAI
OPENAI_API_KEY=
# AI21
AI21_API_KEY=
# Anthropic
ANTHROPIC_API_KEY=
# Cohere
COHERE_API_KEY=
# Gemini
GEMINI_API_KEY=
# Groq
GROQ_API_KEY=
# Mistral
MISTRAL_API_KEY=
# Perplexity
PERPLEXITY_API_KEY=
# OpenRouter
OPENROUTER_API_KEY=
# AWS Bedrock
AWS_REGION_NAME=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
# OpenAI Compatible
OPENAI_COMPATIBLE_API_KEY=
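If you keep these in a local `.env` file, make sure they are loaded into `process.env` (for example with the `dotenv` package) before the Token.js client is constructed. It can also help to fail fast when a required key is missing, rather than hitting a confusing auth error later; a small sketch (the `requireEnv` helper is illustrative, not part of Token.js):

```typescript
// Throw a clear error up front if a required credential is absent.
function requireEnv(
  name: string,
  env: Record<string, string | undefined>
): string {
  const value = env[name]
  if (!value) throw new Error(`Missing required environment variable: ${name}`)
  return value
}

// Demonstrated against an explicit map; in practice pass process.env.
const fakeEnv = { OPENAI_API_KEY: 'sk-example' }
const key = requireEnv('OPENAI_API_KEY', fakeEnv)
```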

Streaming

Token.js supports streaming responses for all providers that offer it.

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const result = await tokenjs.chat.completions.create({
    stream: true,
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: `Tell me about yourself.`,
      },
    ],
  })

  for await (const part of result) {
    process.stdout.write(part.choices[0]?.delta?.content || '')
  }
}
main()
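If you also need the complete text once the stream finishes, accumulate the deltas as they arrive. A sketch against mock chunks shaped like the streamed parts above (`collectDeltas`, `MockChunk`, and the mock stream are illustrative, not part of Token.js):

```typescript
// Minimal slice of a streamed chunk in OpenAI's format.
interface MockChunk {
  choices: Array<{ delta: { content?: string } }>
}

// Concatenate the content deltas from an async iterable of chunks.
async function collectDeltas(stream: AsyncIterable<MockChunk>): Promise<string> {
  let full = ''
  for await (const part of stream) {
    full += part.choices[0]?.delta?.content ?? ''
  }
  return full
}

// Mock stream standing in for the result of a `stream: true` call.
async function* mockStream(): AsyncGenerator<MockChunk> {
  yield { choices: [{ delta: { content: 'Hello, ' } }] }
  yield { choices: [{ delta: { content: 'world!' } }] }
  yield { choices: [{ delta: {} }] } // the final chunk often carries no content
}
```

The `?? ''` fallback mirrors the `|| ''` in the example above: some chunks (notably the last one) have an empty delta.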

Function Calling

Token.js supports function calling for all providers and models that offer it.

import { TokenJS, ChatCompletionTool } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const tools: ChatCompletionTool[] = [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    },
  ]

  const result = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      {
        role: 'user',
        content: `What's the weather like in San Francisco?`,
      },
    ],
    tools,
    tool_choice: 'auto',
  })

  console.log(result.choices[0].message.tool_calls)
}
main()
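To complete the loop, parse each tool call's JSON-encoded arguments, run your own implementation, and send the result back to the model as a `tool` message. A sketch of the dispatch step (the `ToolCall` type, handler map, and weather stub are illustrative, not part of Token.js):

```typescript
// Minimal slice of an OpenAI-format tool call.
interface ToolCall {
  id: string
  function: { name: string; arguments: string } // arguments is a JSON string
}

// Your local implementations, keyed by function name.
const handlers: Record<string, (args: any) => string> = {
  get_current_weather: (args) => `Sunny in ${args.location}`, // stubbed result
}

// Parse the arguments and run the matching handler, producing a message
// you can append to the conversation for a follow-up completion.
function dispatchToolCall(call: ToolCall): { tool_call_id: string; role: 'tool'; content: string } {
  const handler = handlers[call.function.name]
  if (!handler) throw new Error(`No handler for tool: ${call.function.name}`)
  const args = JSON.parse(call.function.arguments)
  return { tool_call_id: call.id, role: 'tool', content: handler(args) }
}

// Example shaped like result.choices[0].message.tool_calls[0] above.
const exampleCall: ToolCall = {
  id: 'call_1',
  function: { name: 'get_current_weather', arguments: '{"location":"San Francisco, CA"}' },
}
const toolMessage = dispatchToolCall(exampleCall)
```

Note that `function.arguments` arrives as a JSON string in OpenAI's format, so it must be parsed before use.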

Feature Compatibility

This table provides an overview of the features that Token.js supports from each LLM provider.

| Provider          | Chat Completion    | Streaming          | Function Calling Tool | JSON Output        | Image Input        |
| ----------------- | ------------------ | ------------------ | --------------------- | ------------------ | ------------------ |
| OpenAI            | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |
| Anthropic         | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |
| Bedrock           | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |
| Mistral           | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :heavy_minus_sign: |
| Cohere            | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :heavy_minus_sign: | :heavy_minus_sign: |
| AI21              | :white_check_mark: | :white_check_mark: | :heavy_minus_sign:    | :heavy_minus_sign: | :heavy_minus_sign: |
| Gemini            | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |
| Groq              | :white_check_mark: | :white_check_mark: | :heavy_minus_sign:    | :white_check_mark: | :heavy_minus_sign: |
| Perplexity        | :white_check_mark: | :white_check_mark: | :heavy_minus_sign:    | :heavy_minus_sign: | :heavy_minus_sign: |
| OpenRouter        | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |
| OpenAI Compatible | :white_check_mark: | :white_check_mark: | :white_check_mark:    | :white_check_mark: | :white_check_mark: |

Legend

| Symbol             | Description                                                      |
| ------------------ | ---------------------------------------------------------------- |
| :white_check_mark: | Supported by Token.js                                            |
| :heavy_minus_sign: | Not supported by the LLM provider, so Token.js cannot support it |

Note: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our LLM provider documentation.

Contributing

See our Contributing guide to learn how to contribute to Token.js.

Issues

Please let us know if there's any way that we can improve Token.js by opening an issue!

License

Token.js is free and open source software licensed under MIT.