
udecide v0.1.2

Six small text classifiers and a train() primitive that run in the browser or in Node and return calibrated probabilities.

Downloads: 334

Readme

udecide

Six small text classifiers and a train() primitive, all of which run in the browser or in Node and return a calibrated probability you can compare and threshold like any other number.

import { spam, intent, grade, train } from 'udecide'

await spam('BUY NOW!!! click here')        // 0.96
await intent('my card was charged twice')  // 'billing'
await grade('positive', 'increase')        // 0.89

Install

npm install udecide

The library runs in Node 22 and higher, in modern browsers, in Bun, in Deno, and inside a Cloudflare Worker, with @huggingface/transformers as the only runtime dependency.

The catalog

Six pre-trained tools, each one a single named import.

| tool      | what it does                                        | first call   |
|-----------|-----------------------------------------------------|--------------|
| spam      | comment-spam probability                            | 23 MB shared |
| intent    | route into billing, support, sales, shipping, other | 23 MB shared |
| sentiment | positive, neutral, negative                         | 23 MB shared |
| toxicity  | abuse probability                                   | 23 MB shared |
| pii       | personal-information detector                       | 23 MB shared |
| grade     | "do these two answers mean the same thing"          | 65 MB        |

The first five share one sentence-encoder model that downloads about 23 MB on the first call and stays cached afterward. grade loads its own cross-encoder of about 65 MB, because direction questions and antonym discrimination need a model that scores the two texts jointly rather than computing a cosine between two independent embeddings. An application that imports and exercises every tool downloads about 88 MB once and runs locally for the rest of the visit.
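The sharing described above is the familiar lazy-singleton pattern. A minimal sketch of the idea, with illustrative names and fake loaders (this is not udecide's internals):

```javascript
// Memoize an async loader so the expensive work happens at most once,
// no matter how many tools request the model or how they interleave.
function makeLazy(loader) {
  let promise = null;
  return () => (promise ??= loader());
}

// Pretend loaders; the real library would download model weights here.
let encoderLoads = 0;
const getSharedEncoder = makeLazy(async () => {
  encoderLoads += 1;
  return { name: 'sentence-encoder', sizeMB: 23 };
});
const getCrossEncoder = makeLazy(async () => ({ name: 'cross-encoder', sizeMB: 65 }));

async function demo() {
  // spam, intent, sentiment, toxicity, and pii would all call getSharedEncoder();
  // concurrent calls still resolve to the same single load.
  await Promise.all([getSharedEncoder(), getSharedEncoder(), getSharedEncoder()]);
  const cross = await getCrossEncoder();
  return { encoderLoads, crossName: cross.name };
}
```

Memoizing the promise rather than the result means concurrent first calls share one in-flight download instead of racing.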

Train your own

import { train, load } from 'udecide'

const classify = await train([
  { text: 'great product', label: 'positive' },
  { text: 'broke immediately', label: 'negative' },
  // around thirty more
])

await classify('works as expected')   // 'positive'

const head = classify.export()
const reloaded = await load(head)

train() takes around thirty labeled examples for a binary task, or around fifty for a multiclass one, and fits a head on top of the sentence encoder, holding out an 80/20 stratified split for evaluation. It calibrates the scores so that a 0.7 actually means roughly 70% confidence on the held-out test set, and it returns a callable closure you can save to disk and reload later. When the classes do not separate, the trainer throws a TrainingError that lists the misclassified examples and the likely causes, which is the only honest thing to do when the underlying signal is not there.
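The stratified holdout mentioned above is worth seeing concretely: each label contributes roughly 80% of its examples to training and 20% to the test set, so a rare class is never absent from either side. A standalone sketch of that step (illustrative, not udecide's actual trainer):

```javascript
// Split labeled examples into train/test, preserving label proportions.
// Every label keeps at least one example in the test set.
function stratifiedSplit(examples, testFraction = 0.2) {
  const byLabel = new Map();
  for (const ex of examples) {
    if (!byLabel.has(ex.label)) byLabel.set(ex.label, []);
    byLabel.get(ex.label).push(ex);
  }
  const train = [];
  const test = [];
  for (const group of byLabel.values()) {
    const nTest = Math.max(1, Math.round(group.length * testFraction));
    test.push(...group.slice(0, nTest));
    train.push(...group.slice(nTest));
  }
  return { train, test };
}
```

A plain random split on thirty examples can easily leave one class with zero test examples, which would make any calibration claim on that class meaningless; stratifying avoids that failure mode.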

Scores

Every score this library returns is a real probability in [0, 1] rather than a raw model output. A 0.7 from any tool can therefore be compared with a 0.7 from any other tool, without having to remember which one came out of which sigmoid, and the standard pattern is a single threshold:

const isSpam = (await spam(text)) > 0.7
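One payoff of calibration is that the threshold need not be guessed: when scores are real probabilities, the break-even point falls out of the costs of the two kinds of mistake. A short sketch of that reasoning (not part of udecide):

```javascript
// Act when the expected cost of not acting exceeds the expected cost of acting:
//   p * costOfMiss > (1 - p) * costOfFalseAlarm
// which rearranges to a threshold on p.
function breakEvenThreshold(costOfMiss, costOfFalseAlarm) {
  return costOfFalseAlarm / (costOfMiss + costOfFalseAlarm);
}

// Example: letting one spam comment through costs 1, wrongly hiding a real
// comment costs 9, so only act on high-confidence scores.
const threshold = breakEvenThreshold(1, 9); // 0.9
```

With uncalibrated raw scores this arithmetic would be meaningless, because a 0.7 would not correspond to any actual frequency of being right.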

What it cannot do

The library is the right tool when the rule you are trying to encode is "this looks like that" and the alternative you would otherwise be writing is a regular expression, a switch on keyword presence, or a string-equality check that has started lying. It is the wrong tool for problems that require parsing, like deciding whether a string is valid Python, and for problems that require step-by-step reasoning, like working out whether a proof is correct, because a sentence encoder collapses both of those into a single vector and loses the information that mattered. The default models are also tuned for English: an application that needs to classify other languages should swap the encoder, via setEmbedder, to one of the multilingual variants documented under the embedders concept page before expecting any of this to calibrate sensibly on its own corpus.

CLI

udecide train ./examples.jsonl --out ./my-head.json
udecide test  ./my-head.json --input "..." --expected "..."
udecide info  ./my-head.json
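The training file is presumably the JSONL analogue of the train() input shown earlier, one object per line with text and label fields; this format is an assumption based on the API, not something the README states:

```
{"text": "great product", "label": "positive"}
{"text": "broke immediately", "label": "negative"}
```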

More

The full documentation lives under docs/, and the seven runnable examples under examples/ cover the patterns most applications actually need, including the screenshot grader at examples/grader-screenshot/, which demonstrates the fix for the specific bug this library was originally written to address.

License

MIT.